GPU stands for graphics processing unit, but these chips are used for far more than graphics. Google, for example, has used GPUs to model the human brain, and Salesforce relies on them to analyze streams of Twitter microblogging data. The GPU is well suited to parallel processing, that is, performing thousands of tasks at the same time. Taking advantage of that, however, requires new software that can tap the chip's potential. Eric Holk, a computer science doctoral researcher at Indiana University in the United States, has recently been working on just such a system for programming the GPU. "GPU programming still requires the programmer to manage a lot of low-level details that are separate from the main task the GPU is performing," Holk said. "We want to develop a system that manages these details for the programmer, so the GPU keeps performing well while productivity improves."
In general, most computing tasks are handled by the CPU. A CPU works through a sequence of computations, processing one thread at a time, and must execute each thread as quickly as possible. A GPU, by contrast, is designed to handle many threads at once; each individual thread runs more slowly, but a program that exploits this parallelism can finish much faster, much like on a supercomputer.
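As a rough illustration (not from the original article), the CUDA sketch below contrasts the two models: a single CPU thread walks an array one element at a time, while the GPU launches thousands of lightweight threads, each adding just one pair of elements. The array size, kernel name, and use of unified memory are choices made for this example only.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// GPU version: each of the many threads adds exactly one pair of elements.
__global__ void addKernel(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

// CPU version: one thread walks the whole array, one element at a time.
void addSerial(const float *a, const float *b, float *c, int n) {
    for (int i = 0; i < n; ++i) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // about a million elements
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);          // unified memory keeps the sketch short
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;   // thousands of threads in flight
    addKernel<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);           // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```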
Today's CPUs can also execute in parallel, and multicore chips are now common, but they are mostly optimized for single-threaded performance, Holk says.
The term GPU did not appear until 1999, but video processing chips existed much earlier, going back to the 1970s and 1980s. At that time, video chips depended heavily on the CPU for graphics processing. Graphics cards became more common and more powerful in the 1990s, largely because of the arrival of 3D graphics cards.
Chris McClanahan of the Georgia Institute of Technology notes that GPU hardware architecture has evolved from a single fixed-function core into a set of highly parallel, programmable cores that can handle more general computation. As GPU technology develops further, he argues, it will gain more programmability and more parallelism, becoming more and more CPU-like and usable for general-purpose computing. McClanahan believes the CPU and GPU will eventually converge. In the meantime, developers are beginning to tap the GPU's capabilities for other applications, including modeling physical systems and enhancing smartphones.
"The GPU's memory bandwidth is also much higher than the CPU, and it is much more efficient when it comes to simple calculations of massive amounts of data," explains Hulk. ”
Several GPU programming languages already exist, including CUDA and OpenCL. Holk's new language, Harlan, also targets the GPU; in fact, it compiles down to OpenCL. Unlike those languages, however, Harlan offers a level of abstraction closer to that of high-level languages such as Python and Ruby. "Another goal of Harlan is to answer the question: if a language were designed from the start to support GPU programming, what would it look like?" Holk said. "Most current systems embed GPU programming in an existing language, so developers have to deal with all the quirks of the host language. Harlan lets developers make better decisions for the target hardware and program."
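The "low-level details" Holk mentions are easiest to see in plain CUDA (shown below as an assumed illustration, not Harlan code): the programmer explicitly allocates device memory, copies data to and from the GPU, chooses a launch geometry, and frees everything afterwards. A language like Harlan aims to hide exactly this kind of bookkeeping.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void square(float *d, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] = d[i] * d[i];
}

int main() {
    const int n = 1024;
    float host[1024];
    for (int i = 0; i < n; ++i) host[i] = (float)i;

    // The "low-level details" in Holk's quote: explicit device allocation,
    // host-to-device copies, launch geometry, synchronization, and cleanup.
    float *dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    square<<<blocks, threads>>>(dev, n);

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("host[3] squared = %f\n", host[3]);   // expect 9.0
    return 0;
}
```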
Harlan's syntax is based on Scheme, a modern dialect of Lisp, the ancestor of so many good languages. To work in a more "normal" programming language, Holk has also used Rust, a language aimed at systems development that can operate close to the underlying hardware. His goal is to make the code programmers write more efficient, because Harlan can generate better GPU code.
Source: Sohu IT, 2013-07-05