Distributed computing is a branch of computer science that harnesses the idle processing capacity of computers on the Internet to solve large computational problems. Let's see how it works.
First, find a problem that requires enormous computing power to solve. Such problems are generally interdisciplinary, challenging research subjects that humanity needs answered. Among the better-known examples are:
1. Solving hard mathematical problems, such as GIMPS (the Great Internet Mersenne Prime Search, which looks for the largest known Mersenne primes).
2. Research into more secure cryptographic systems, such as RC5-72 (brute-force cracking of a 72-bit RC5 key).
3. Biological and pathological studies, such as Folding@home (studying protein folding, misfolding, and aggregation, and the diseases they cause).
4. Drug research for a wide range of diseases, such as United Devices (searching for effective anti-cancer drug candidates).
5. Signal processing, for example SETI@home (searching for signs of extraterrestrial intelligence from home).
As these examples show, such projects are enormous and demand a staggering amount of computation; no single computer or individual could finish the work in an acceptable time. In the past, problems like these would have been handed to supercomputers, but supercomputers are very expensive to buy and maintain, beyond the means of an ordinary research organization. As science progressed, a cheap, efficient, and easy-to-maintain alternative emerged: distributed computing.
With the spread of personal computers into millions of households came a great deal of unused capacity. More and more computers sit idle, and even when powered on, their CPUs are far from fully utilized. A home computer spends most of its time "waiting"; even while a user is actively working, the processor spends countless cycles waiting for input without actually doing anything. The advent of the Internet made it practical to connect these machines and draw on their spare computing resources.
So problems are chosen that are very complex but well suited to being divided into small pieces of computation, and a research organization develops, with considerable effort, a server and a client program. The server divides the problem into many small work units, distributes them to many networked computers for parallel processing, and finally combines the partial results to obtain the final answer.
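This split, distribute, and combine scheme can be sketched in a few lines of Python. This is a minimal illustration, not any real project's protocol: the "grid" is simulated with a local thread pool, and counting primes in a range stands in for a real scientific workload.

```python
from concurrent.futures import ThreadPoolExecutor

def count_primes(bounds):
    """Worker (client side): count primes in [lo, hi) -- one small work unit."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def split_work(lo, hi, n_units):
    """Server side: divide the big problem into small work units."""
    step = (hi - lo) // n_units
    bounds = [(lo + i * step, lo + (i + 1) * step) for i in range(n_units)]
    bounds[-1] = (bounds[-1][0], hi)  # last unit absorbs any remainder
    return bounds

def run_grid(lo, hi, n_units):
    """Distribute units to parallel workers and combine the partial results."""
    units = split_work(lo, hi, n_units)
    with ThreadPoolExecutor() as pool:
        partials = pool.map(count_primes, units)
    return sum(partials)

if __name__ == "__main__":
    # In a real project each unit would go to a different volunteer machine
    # over the network; a local pool only demonstrates the pattern.
    print(run_grid(0, 100_000, 8))
```

The key property is that work units are independent, so they can be computed in any order, on any machine, and simply summed at the end, which is exactly what makes such problems "well suited to being divided".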
Of course, this may seem crude and laborious, but as the number of participants and machines grows, the computation speeds up dramatically, and the approach has proved practical. Today, the largest distributed computing projects command aggregate processing power exceeding that of even the world's fastest supercomputers.
You can also participate in a project by donating CPU time, and the processing time you contribute will appear in the project's statistics. You can compete with other participants in the contribution rankings, join an existing computing team, or start one yourself. This helps keep participants motivated.
Grid computing, long the province of scientists and engineers, has finally moved toward the mainstream. CIOs now have to analyze their applications to see whether the enterprise would benefit from grid capabilities.
Derivatives can be a magic wand for money managers. Used properly, these complex financial contracts control risk and thereby help protect profits, but pricing them correctly is the key.
For derivatives sellers such as Wachovia, assessing risk and setting prices is not something done with a flick of the wand. Derivatives modeling software is quite complex and must run a large number of hypothetical scenarios to determine closing prices and compute the risk profile of a derivatives portfolio. This analysis was typically performed on a large multiprocessor UNIX machine and could take as long as 9 hours. Upgrading the hardware was not a realistic fix. "After the upgrade, the calculation time could be shortened from 9 hours to 4.5 hours," said Mark Cates, chief technology officer of Wachovia's corporate and investment banking business, "but we needed it to run within 1 hour."
The solution was not to buy expensive hardware but to exploit cheap hardware already in place. Wachovia pooled hundreds of already-deployed desktop machines into a grid, making the most of each machine's available processing time, with astonishing results: work that used to take a day and a night can now be finished within an hour, greatly speeding up risk and pricing decisions.
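Scenario-based derivative analysis parallelizes naturally because each hypothetical price path is independent. The sketch below is purely illustrative and not Wachovia's actual model: it prices a European call option by Monte Carlo simulation under assumed market parameters, splitting the paths into work units that a pool of workers runs in parallel before the partial sums are combined.

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

def simulate_unit(args):
    """Worker: run one batch of hypothetical price paths (one work unit)."""
    seed, n_paths, s0, k, r, sigma, t = args
    rng = random.Random(seed)  # per-unit seed keeps results reproducible
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        # Terminal price under geometric Brownian motion
        st = s0 * math.exp((r - 0.5 * sigma ** 2) * t + sigma * math.sqrt(t) * z)
        payoff_sum += max(st - k, 0.0)  # European call payoff
    return payoff_sum

def grid_price(n_units, paths_per_unit,
               s0=100.0, k=105.0, r=0.05, sigma=0.2, t=1.0):
    """Split the simulation into units, farm them out, combine the results.

    The parameters (spot 100, strike 105, 5% rate, 20% vol, 1 year) are
    hypothetical, chosen only to make the example concrete.
    """
    units = [(seed, paths_per_unit, s0, k, r, sigma, t)
             for seed in range(n_units)]
    with ThreadPoolExecutor() as pool:
        total = sum(pool.map(simulate_unit, units))
    n = n_units * paths_per_unit
    return math.exp(-r * t) * total / n  # discounted average payoff

if __name__ == "__main__":
    print(grid_price(8, 50_000))
```

On a real grid, each work unit would run on a different desktop machine; the combining step stays the same because the units share no state.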
Cates says that compared with upgrading the large UNIX environment, the grid solution cost little and delivered better results. "We found that throughput soared 10 to 20 times, at only 25% of the cost of the hardware-upgrade option," he said.
Wachovia is not taking a gamble on untried technology. Thanks to advances in hardware and software, many companies have begun using grid tools. Enterprise users, especially in the financial services industry, see many benefits in the grid: faster response, shorter time-to-market for new products, and a lower price per unit of computation. Although the grid has yet to become fully mainstream and some hurdles remain (many applications simply cannot be migrated to the grid today), it is no longer just a tool for technologists deciphering genomes or designing aircraft wings.