Nvidia Tesla K40

Source: Internet
Author: User

The acceleration ratio is determined by many factors: software, hardware, algorithms, and the skill of the programmer. Today almost all deep learning researchers use GPUs. Anyone familiar with deep learning knows that models must be trained; training means searching among thousands of variables for the values that work best. This requires repeated attempts until the computation converges, and the resulting values are not numbers picked by hand but the output of a regular process. Through this pixel-level learning, continually distilling patterns from data, a computer can come to "think" somewhat like a person.

Today, almost all deep learning (machine learning) researchers use GPUs for their work. Of course, I said "almost": besides the GPU, there are alternative solutions based on both MIC and FPGA. How does Nvidia view the impact of different hardware architectures on deep learning, and how does it evaluate them? Loo Huaping, Director of Solution Architecture Engineering at Nvidia China, said: "The development of a technology requires the participation of many different technologies. Whether GPU, FPGA, or dedicated neural-network chips, their main purpose is to push deep learning (machine learning) forward. So in the early days it is entirely reasonable to try different techniques and explore which one best fits this application. At present, the heavy computation in deep learning is concentrated in training, and in that area the GPU is a very good fit, which is reflected in the fact that the industry giants, such as BAT (Baidu, Alibaba, Tencent), Google, and Facebook, are all using GPUs for training."

In addition to training, for practical deployments Nvidia is also taking into account the power and network conditions commonly found in Chinese IDC machine rooms, and considering whether to design a low-power GPU to meet those users' needs.

Beyond hardware, Rai Junjie, a technology manager at Nvidia, answered software-related questions about the value of GPUs for deep learning applications. First, from the perspective of development tools, CUDA-capable GPUs provide a good starting platform for users learning research frameworks such as Caffe and Theano. In fact, CUDA support is not limited to the HPC-focused Tesla line; GeForce GPUs can also run CUDA computations, which keeps the entry barrier low for beginners. Second, algorithm and program design in CUDA is easier than in many alternative environments, and Nvidia's years of promotion have built up a broad user community, which further reduces development difficulty. Finally, for deployment, a GPU can be installed directly in a server through its PCI-E interface, which is convenient and fast. Thanks to these advantages in hardware support and software programming, the GPU has become the most widely used platform today.

Is there a bottleneck in the development of deep learning? We use GPUs to accelerate deep learning because the amount of data to be processed is unusually large, and traditional computing methods take too long. But if the data volumes involved in future deep learning decline, or if we cannot supply enough data for deep learning research, does that mean deep learning will enter a "winter"? Rai Junjie offered another view: "Training a deep neural network requires a great deal of data before the mathematics converges. For deep learning to really approach human intelligence, it will need very large neural networks and far more data than today's speech recognition and image processing use. If we find that we have no way to provide that much data, it is very likely that a winter will occur."

But he added that, as things stand today, deep learning is booming. In the relatively mature areas of speech and image processing, data volumes are still growing and network architectures keep getting more complex. "Right now I have no way to predict whether a day will come when the data really is not enough."

For Nvidia, deep learning is a great opportunity to develop GPU computing and a new business growth point following HPC. As Pandey mentioned, Nvidia brings its successful experience from around the world to China, including overseas success stories and good relationships with partners, to help Chinese customers grow quickly. "Because this is the Internet era, an era without borders, we are all the same."

On the hardware side, the K40's effective memory clock rises from the K20X's 5.2 GHz to 6 GHz, while the memory bus width stays at 384 bits. Bandwidth therefore increases from 250 GB/s to 288 GB/s, yet the TDP holds at the K20X's 235 W, so power is very well controlled. The biggest change is memory capacity: the K20X shipped with 6 GB as standard, while the K40 carries 12 GB, without increasing the number of memory chips. Nvidia has switched to 4 Gb memory chips this time; previous products, including the desktop, Tesla, and Quadro lines, used 2 Gb chips. With the same 24 chips, total capacity rises to 12 GB on the K40 (24 × 4 Gb) versus 6 GB on the K20X (24 × 2 Gb). Another notable change is that the Tesla K40 finally supports PCI-E 3.0, whereas earlier Tesla K-series cards were limited to PCI-E 2.0.
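The capacity and bandwidth figures above follow from simple arithmetic. A minimal sketch, using only the numbers stated in the paragraph above (24 memory chips, 2 Gb vs 4 Gb chip density, a 384-bit bus, and 5.2 GHz vs 6 GHz effective memory clocks):

```python
def total_memory_gb(chips: int, density_gbit: int) -> float:
    """Total memory in GB: chip count times per-chip density in gigabits, over 8 bits/byte."""
    return chips * density_gbit / 8

def bandwidth_gbs(bus_width_bits: int, effective_clock_ghz: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bits times effective clock in GHz, over 8."""
    return bus_width_bits * effective_clock_ghz / 8

print(total_memory_gb(24, 4))    # Tesla K40:  24 chips x 4 Gb -> 12.0 GB
print(total_memory_gb(24, 2))    # Tesla K20X: 24 chips x 2 Gb -> 6.0 GB
print(bandwidth_gbs(384, 6.0))   # K40:  384-bit bus at 6 GHz   -> 288.0 GB/s
print(bandwidth_gbs(384, 5.2))   # K20X: 384-bit bus at 5.2 GHz -> ~249.6 GB/s
```

The K20X result of roughly 249.6 GB/s is what the article rounds to 250 GB/s.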
