Nvidia binds machine learning to the GPU, letting developers avoid focusing on the underlying hardware

Source: Internet
Author: User
Keywords: big data, machine learning, Nvidia, deep learning, cuDNN

"Editor's note" Nvidia links GPU to machine learning more closely with the release of the Cudnn library, while achieving direct integration of the CUDNN and depth learning frameworks, allowing researchers to seamlessly utilize the GPU on these frameworks, ignoring low-level optimizations in the deep learning system, Focus more on more advanced machine learning issues.

The following is the translation:

In recent days, by releasing a library called cuDNN, Nvidia has tied the GPU more closely to machine learning. cuDNN can be integrated directly with today's popular deep learning frameworks, and Nvidia promises that it will help users focus on the deep neural networks themselves rather than on the drudgery of optimizing hardware performance.

At present, deep learning is being adopted by more and more large Internet companies, researchers, and even startups to advance AI capabilities, with representative applications in computer vision, text retrieval, and speech recognition. Graphics processing units (GPUs) are widely used in these popular fields, including computer vision, because each GPU contains thousands of cores that can accelerate compute-intensive algorithms.

According to Nvidia, cuDNN is built on the company's CUDA parallel programming platform and can be integrated with a variety of deep learning frameworks regardless of the model being trained. A spokesman for Nvidia added:

Through integration with mainstream machine learning frameworks such as Caffe, Theano, and Torch7, cuDNN allows researchers to seamlessly exploit the GPU's capabilities within these frameworks while leaving room for future development. For example, the cuDNN integration in Caffe is invisible to end users and requires only very simple setup; plug-and-play is a core design goal of cuDNN.
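To illustrate how "invisible" this integration is in practice: in Caffe's Makefile-based build, enabling cuDNN amounts to flipping a single build flag, after which the framework dispatches supported layers to cuDNN automatically. (The exact file name and layout may vary between Caffe versions; this is a sketch of the commonly documented setup.)

```makefile
# In Caffe's Makefile.config, uncomment this switch before building.
# No model or training code changes are needed; layers with cuDNN
# implementations will use them transparently.
USE_CUDNN := 1
```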





From a more technical standpoint, cuDNN is a low-level library that can be invoked from host code without writing any CUDA code, much like the cuBLAS and cuFFT libraries Nvidia has already developed. With cuDNN, users no longer need to worry about the low-level optimizations of earlier deep learning systems; they can concentrate on higher-level machine learning problems and push the field forward. Code built on cuDNN also runs faster.
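To make the "host code only" point concrete, here is a minimal sketch of what calling cuDNN looks like. It uses the cuDNN C API (context handle and tensor descriptor creation); error checking is elided, the descriptor-creation function names shown are those of later cuDNN releases rather than the original v1, and building it requires a CUDA-capable GPU plus the cuDNN headers and library, so treat it as illustrative only.

```c
#include <cudnn.h>   /* cuDNN public C API header */
#include <stdio.h>

int main(void) {
    cudnnHandle_t handle;
    cudnnTensorDescriptor_t desc;

    /* All of this is ordinary host code: no __global__ kernels and
       no CUDA C required, just like calling cuBLAS or cuFFT. */
    cudnnCreate(&handle);
    cudnnCreateTensorDescriptor(&desc);

    /* Describe a batch of 32 RGB images of 224x224 pixels in NCHW layout. */
    cudnnSetTensor4dDescriptor(desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                               32, 3, 224, 224);

    /* ... cudnnConvolutionForward(), cudnnPoolingForward(), etc.,
       would operate on device buffers described this way ... */

    cudnnDestroyTensorDescriptor(desc);
    cudnnDestroy(handle);
    printf("cuDNN context created and destroyed\n");
    return 0;
}
```

The library hides the choice of GPU kernel behind these descriptor-driven calls, which is exactly the "ignore the low-level optimizations" promise described above.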

Whether for future growth or for the long-term goal of making GPUs useful beyond computer graphics rendering, Nvidia is embracing deep learning and machine learning very actively. GPU use is already widespread, and organizations use GPUs in place of CPUs for higher speed and lower cost.

However, some factors could still inhibit the GPU's long-term dominance. One is alternative architectures, such as IBM's SyNAPSE and the efforts of startups like Nervana, which are designed specifically for neural network and machine learning workloads. The other is existing processor architectures, including CPUs and FPGAs, which have also shown promise for future machine learning workloads.

Although many cloud providers now offer deep learning in the form of a service, deep machine learning is still far from mainstream.

Original link: Nvidia stakes its claim in deep learning by making its GPUs easier to program (Translated by Zhonghao; revised by Wei)

Free Subscription "CSDN cloud Computing (left) and csdn large data (right)" micro-letter public number, real-time grasp of first-hand cloud news, to understand the latest big data progress!

CSDN publishes related cloud computing information, such as virtualization, Docker, OpenStack, Cloudstack, and data centers, sharing Hadoop, Spark, Nosql/newsql, HBase, Impala, memory calculations, stream computing, Machine learning and intelligent algorithms and other related large data views, providing cloud computing and large data technology, platform, practice and industry information services.



