"Editor's note" with the development of artificial intelligence technology, the major technology companies have increased their investment in deep learning, and as the National Science Foundation is the same, now, it through the funding of the United States University researchers, to promote the depth of learning algorithms on the FPGA and super computer running. Although it is only a trend that represents the depth of learning, but with the business operations of major technology companies and more in-depth study into the University Research Center and the National Laboratory, the development of in-depth learning to play a positive role in promoting.
The translated article follows:
Machine learning has made great strides in the past few years, thanks in large part to new technologies built for computationally intensive workloads. The NSF's latest round of funding suggests that what we have seen so far may be just the tip of the iceberg, as researchers try to bring similar deep learning techniques to more computers and to new types of processors.
A particularly interesting project comes from a team at Stony Brook University in New York, which aims to show that FPGAs (field-programmable gate arrays) are superior to GPUs, and that deep learning algorithms can run faster and more efficiently on FPGAs. That would overturn the current conventional wisdom.
According to the project summary:
The researchers predict that the parts of the algorithm that run slowest on the GPU will see significant speedups on the FPGA, while the parts that run fastest on the GPU will achieve comparable performance on the FPGA at far lower power consumption.
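To make that prediction concrete, the usual yardstick is performance per watt. Here is a minimal sketch of the arithmetic with purely hypothetical numbers (the project summary gives no figures):

```python
# Hypothetical perf-per-watt comparison; the numbers below are
# illustrative placeholders, not measurements from the project.
gpu_throughput = 1000.0   # inferences/sec on the GPU (assumed)
gpu_power = 250.0         # watts (assumed, typical discrete GPU)

fpga_throughput = 1000.0  # comparable throughput on the FPGA (assumed)
fpga_power = 25.0         # watts (assumed, typical FPGA accelerator)

gpu_ppw = gpu_throughput / gpu_power      # 4 inferences/sec per watt
fpga_ppw = fpga_throughput / fpga_power   # 40 inferences/sec per watt

print(f"GPU:  {gpu_ppw:.1f} inferences/sec/W")
print(f"FPGA: {fpga_ppw:.1f} inferences/sec/W")
print(f"FPGA advantage: {fpga_ppw / gpu_ppw:.0f}x per watt")
```

Under these assumed numbers, matching the GPU's throughput at a tenth of its power budget yields a tenfold advantage in performance per watt, which is the kind of result the researchers are hoping to demonstrate.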
In fact, running these models on hardware other than GPUs is not novel. IBM, for example, recently made a splash with a new brain-inspired chip that it claims is well suited to neural networks and other cognitively inspired workloads. And in July of this year, Microsoft demonstrated its Project Adam, which retooled a popular deep learning technique to run on general-purpose Intel CPUs.
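Project Adam's code is not public, but the underlying point, that deep learning workloads are largely dense linear algebra a commodity CPU can execute, is easy to illustrate. Below is a minimal sketch of one training step of a tiny neural network in NumPy, running entirely on a general-purpose CPU; it is an illustration of the idea, not Project Adam's actual (distributed) implementation:

```python
import numpy as np

# Tiny one-hidden-layer network trained for one step on random data.
# Purely illustrative: deep learning on a commodity CPU.
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 100))   # batch of 64 examples
y = rng.standard_normal((64, 10))    # regression targets

w1 = rng.standard_normal((100, 256)) * 0.01
w2 = rng.standard_normal((256, 10)) * 0.01
lr = 1e-2

# Forward pass: affine -> ReLU -> affine, then mean-squared-error loss.
h = np.maximum(x @ w1, 0.0)
pred = h @ w2
loss = np.mean((pred - y) ** 2)

# Backward pass (hand-derived gradients for the MSE loss).
dpred = 2.0 * (pred - y) / y.size
dw2 = h.T @ dpred
dh = dpred @ w2.T
dh[h <= 0.0] = 0.0
dw1 = x.T @ dh

# Plain SGD update.
w1 -= lr * dw1
w2 -= lr * dw2
print(f"loss after forward pass: {loss:.4f}")
```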
FPGAs' customizability gives them a unique advantage. In June, Microsoft explained how it speeds up Bing search by offloading parts of the process to FPGAs, and later that month, at Gigaom's Structure conference, Intel announced a forthcoming hybrid chip architecture that will put an FPGA alongside the CPU (the two will in fact share memory), aimed mainly at specialized data workloads such as Microsoft Bing's.
FPGAs are not the only promising infrastructure option for deep learning models, however. The NSF is also funding researchers at New York University to test deep learning algorithms and other workloads over Remote Direct Memory Access (RDMA) over Ethernet. RDMA is already widely used in supercomputers and is now making its way into enterprise systems; RDMA connections bypass the CPU and other sources of latency in the transfer path, speeding up data movement between computers.
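The latency argument can be sketched with a simple transfer-time model, transfer time ≈ per-message overhead + size / bandwidth, where the overhead term is exactly what RDMA attacks by bypassing the CPU and the kernel network stack. The numbers below are rough, illustrative orders of magnitude, not figures from the NYU project:

```python
# Simple transfer-time model: time = overhead + size / bandwidth.
# Overheads and bandwidths below are illustrative assumptions only.
def transfer_time_us(size_bytes, overhead_us, bandwidth_gbps):
    # bandwidth_gbps * 1e3 converts Gbit/s to bits per microsecond.
    return overhead_us + (size_bytes * 8) / (bandwidth_gbps * 1e3)

size = 64 * 1024  # a 64 KB chunk of model data (assumed)

tcp  = transfer_time_us(size, overhead_us=50.0, bandwidth_gbps=10.0)
rdma = transfer_time_us(size, overhead_us=2.0,  bandwidth_gbps=10.0)

print(f"kernel TCP stack:  {tcp:.1f} us")
print(f"RDMA (CPU bypass): {rdma:.1f} us")
```

On the same 10 Gb Ethernet link, the wire time is identical; the savings come entirely from cutting the fixed per-transfer overhead, which matters most for the many small messages typical of distributed training.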
Speaking of supercomputers, another newly funded NSF project, led by machine learning expert Andrew Ng of Stanford University (also of Baidu and Coursera) together with supercomputing experts Jack Dongarra of the University of Tennessee and Geoffrey Fox of Indiana University, is designed to give deep learning models Python programmability and bring them to supercomputers and scaled-out cloud systems. The project, which has received nearly $1 million in NSF funding, is called the Rapid Python Deep Learning Infrastructure (RaPyDLI).
RaPyDLI will be built as a set of open-source modules that can be accessed from a Python user interface but executed in C++ or Java environments on the largest supercomputers or clouds, complete with interactive analysis and visualization. RaPyDLI will support GPU accelerators and Intel Phi coprocessors, as well as a broad range of storage technologies including files, NoSQL, HDFS, and databases.
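RaPyDLI's code is not part of this article, but the design it describes, a Python front-end driving natively compiled kernels, is the same split NumPy already uses, where a Python call dispatches into compiled BLAS code. A minimal sketch of that pattern, under those assumptions:

```python
import time
import numpy as np

# Illustration of the "Python interface, native execution" split that
# RaPyDLI describes: the Python code below only orchestrates, while the
# heavy matrix multiply runs in NumPy's compiled BLAS backend.
# This is a sketch of the design pattern, not RaPyDLI code.

def dense_layer(x, w):
    """Python-level API; the @ operator dispatches to native code."""
    return np.maximum(x @ w, 0.0)

rng = np.random.default_rng(1)
x = rng.standard_normal((2048, 2048))
w = rng.standard_normal((2048, 2048))

start = time.perf_counter()
out = dense_layer(x, w)
elapsed = time.perf_counter() - start
print(f"2048x2048 matmul + ReLU in {elapsed * 1000:.1f} ms "
      f"(executed in compiled code, driven from Python)")
```

The same idea scales up: the user writes and debugs in Python, while the expensive kernels run as native code on whatever accelerator the backend targets.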
All of this work is aimed at making deep learning algorithms easier to use and better performing, and these three projects are only a small part of it. But whether the results end up with the technology giants of the commercial world or in research centers and national laboratories, they should help put computers to work on genuinely hard problems.
Original link: Researchers hope deep learning algorithms can run on FPGAs and supercomputers (Compiled by Wei, reviewed by Zhonghao)
Free Subscription "CSDN cloud Computing (left) and csdn large data (right)" micro-letter public number, real-time grasp of first-hand cloud news, to understand the latest big data progress!
CSDN publishes related cloud computing information such as virtualization, Docker, OpenStack, Cloudstack, data center, sharing Hadoop, Spark, Nosql/newsql, HBase, Impala, Large data viewpoints, such as memory calculation, stream computing, machine learning and intelligent algorithms, provide services such as cloud computing and large data technology, platform, practice and industry information. &NBSP