Intel and NVIDIA Fight a Technology Contest, Gambling on the Future (video)

Tencent Digital News (Zhou Shuo) — There has been no shortage of events in the digital world lately. Most spectators turned their attention to Apple, whose iPhone 7 fell short of expectations, but I have been watching the war of words between NVIDIA and Intel and the fight behind it.

Big data and artificial intelligence have become the prizes the giants scramble for. Smart hardware devices and the data they generate are growing explosively, at a speed beyond imagination. What is all that data for? Every maker of smart hardware hopes to make the most of its product data, analyzing and processing users' habits in order to make its products more intelligent. This is true not only for smartwatches and smart homes but also for driverless cars and robots; in the future it will touch every aspect of life. Since processors are the foundation of artificial intelligence and deep learning, how to position and pair CPUs, GPUs, and algorithms in order to seize the market initiative, and with it the future, is something both Intel and NVIDIA have to weigh.

At IDF, held just last month, Intel's keynote was aimed squarely at the hot topics of big data, artificial intelligence, and deep learning, and its string of major acquisitions shows its determination. The problem is that NVIDIA will not let Intel get comfortable: its frequent moves in this area have genuinely made Intel nervous.

NVIDIA trumpets that the GPU crushes the CPU in deep learning. NVIDIA's GPU business and Intel's CPU business once minded their own affairs and even cooperated happily, but the two are now increasingly tit for tat. The main reason is that big data processing, artificial intelligence, and deep learning depend on hardware in ways that differ from traditional computing.
According to NVIDIA, a GPU is many times more efficient than a CPU at deep learning, outclassing even Intel's proud Xeon processors. In artificial intelligence, chip processing efficiency and optimization arguably account for half of what matters, and at the chip level the industry broadly agrees that the GPU's advantage in AI and deep-learning algorithms far exceeds the CPU's. This is why NVIDIA can be so strong in artificial intelligence and deep learning.

In general, the GPU has four main advantages over the CPU:

1. The GPU was born for parallel computing, while the CPU is inherently optimized for serial instructions; artificial intelligence needs exactly this stronger parallel capability.
2. On the same chip area, a GPU can integrate more computing units.
3. Per unit of computation, a GPU's energy consumption is much lower than a CPU's.
4. The GPU has a larger-capacity memory structure, an advantage when caching large amounts of data.

In the first half of the year, NVIDIA launched the Tesla P100 GPU specifically for deep neural networks, and built the NVIDIA DGX-1 deep-learning supercomputer on top of it. Photos published in the media show Jen-Hsun Huang personally delivering a DGX-1 supercomputer, signed with his own name, to Elon Musk.
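The parallelism argument above can be made concrete with a small sketch. A deep-learning layer is dominated by matrix multiplication, where every output element is an independent dot product; a serial processor must compute them one after another, while a parallel processor can compute them all at once. The following illustrative Python code (not from the article; function names are my own) contrasts the two formulations of the same computation:

```python
import numpy as np

def dense_layer_serial(x, w):
    """One output at a time, the way a purely serial processor would work."""
    out = np.zeros(w.shape[1])
    for j in range(w.shape[1]):          # each output element...
        acc = 0.0
        for i in range(x.shape[0]):      # ...is an independent dot product
            acc += x[i] * w[i, j]
        out[j] = acc
    return out

def dense_layer_parallel(x, w):
    """All outputs expressed at once; on a GPU each output element
    could be assigned to its own thread and computed simultaneously."""
    return x @ w

# Both formulations compute the same result; only the execution model differs.
x = np.arange(4, dtype=np.float64)       # input vector
w = np.ones((4, 3))                      # weight matrix
assert np.allclose(dense_layer_serial(x, w), dense_layer_parallel(x, w))
```

Because none of the per-output dot products depends on another, the work maps naturally onto the thousands of small cores a GPU provides, which is the substance of advantage 1 above.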