References:
"The Best of the 20th Century: Editors Name Top 10 Algorithms," by Barry A. Cipra. URL: http://www.uta.edu/faculty/rcli/topten/topten.htm
Translator's notes:
1. Of the top ten algorithms of the 20th century, you may already be familiar with quicksort and the fast Fourier transform; for the other algorithms, a general understanding is enough.
2. This is not a recent article. I translated and studied it simply because I am interested in algorithms.
====================================
The algorithm masters who invented the top ten algorithms
I. 1946: The Monte Carlo Method
[1946: John von Neumann, Stan Ulam, and Nick Metropolis, all at the Los Alamos Scientific Laboratory, cook up the Metropolis algorithm, also known as the Monte Carlo method.]
In 1946, John von Neumann, Stan Ulam, and Nick Metropolis, three scientists at the Los Alamos Scientific Laboratory,
jointly invented the Metropolis algorithm, also known as the Monte Carlo method.
Its idea can be illustrated as follows:
draw a square with one-meter sides on the ground, and inside it draw an irregular shape with chalk.
How do you calculate the area of this irregular shape?
The Monte Carlo method tells us: scatter N soybeans (N a large natural number) uniformly over the square,
then count the number that land inside the irregular shape, say M.
The area of the shape is then approximately M/N square meters, and the larger N is, the more accurate the estimate.
Here we assume all the beans lie flat on the plane with no overlap between them. (The soybeans are just a metaphor.)
The Monte Carlo method can also be used to approximate pi:
have the computer repeatedly generate a pair of random numbers between 0 and 1 and check whether the point they form lies inside the unit circle.
Over a series of such random points, count the number inside the circle and the total number of points. Since the ratio of the quarter-circle area to the square area is pi : 4
(thanks to netizen qilihe for pointing out that S_circle : S_square = pi : 4; see the comments below),
the ratio of the two counts approaches pi/4, and the more random points are generated, the closer the result gets to pi.
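The quarter-circle estimate described above can be sketched in a few lines of Python (a minimal illustration added by the translator, not code from the original article):

```python
import random

def estimate_pi(n):
    """Estimate pi by sampling n random points in the unit square and
    counting how many fall inside the quarter circle x^2 + y^2 <= 1."""
    inside = 0
    for _ in range(n):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # Area ratio: quarter circle / square = pi/4, so pi ~= 4 * inside / n.
    return 4.0 * inside / n

print(estimate_pi(1_000_000))  # close to 3.1416 for large n
```

As the text says, the estimate only improves slowly: the error shrinks roughly like 1/sqrt(n), so each extra digit of accuracy costs about 100 times more points.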
II. 1947: The Simplex Method
[2, 1947: George Dantzig, at the RAND Corporation, creates the simplex method for linear programming.]
In 1947, George Dantzig of the RAND Corporation invented the simplex method.
Since then, the simplex method has been an important cornerstone of linear programming.
Linear programming, simply put, is this: given a set of linear constraints (all variables appear only to the first power),
for example a1*x1 + b1*x2 + c1*x3 > 0, find the extreme value of a given objective function.
This may sound abstract, but real-world examples are not hard to find. For a company, the human and material resources that can be put into production are limited (the "linear constraints"), and the company's goal is to maximize profit (the "maximum of the objective function"). See, linear programming is not abstract at all!
As part of operations research, linear programming has become an important tool in management science.
Dantzig's simplex method is an extremely efficient way to solve linear programming problems.
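The profit example above can be made concrete. The sketch below (the numbers are made up for illustration) does not implement the full simplex method; it exploits the same geometric fact the simplex method relies on, namely that a linear objective always attains its optimum at a vertex of the feasible polygon, and simply enumerates those vertices:

```python
from itertools import combinations

# Maximize profit 3x + 2y subject to linear constraints (made-up numbers):
#   x + y <= 4   (limited materials)
#   x     <= 2   (limited labor)
#   x, y  >= 0
# Each constraint is a row (a, b, c) meaning a*x + b*y <= c.
constraints = [(1, 1, 4), (1, 0, 2), (-1, 0, 0), (0, -1, 0)]

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y <= c + eps for a, b, c in constraints)

# Vertices of the feasible polygon lie at intersections of two
# constraint boundaries; keep only the feasible intersections.
vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue  # parallel boundaries never intersect
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        vertices.append((x, y))

best = max(vertices, key=lambda v: 3 * v[0] + 2 * v[1])
print(best, 3 * best[0] + 2 * best[1])  # (2.0, 2.0) 10.0
```

Brute-force vertex enumeration explodes combinatorially as the number of constraints grows; Dantzig's insight was to walk from vertex to adjacent vertex, always improving the objective, which reaches the optimum without visiting more than a tiny fraction of the vertices in practice.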
III. 1950: The Krylov Subspace Iteration Method
[1950: Magnus Hestenes, Eduard Stiefel, and Cornelius Lanczos, all from the Institute for Numerical Analysis at the National Bureau of Standards, initiate the development of Krylov subspace iteration methods.]
In 1950, Magnus Hestenes, Eduard Stiefel, and Cornelius Lanczos, at the Institute for Numerical Analysis
of the National Bureau of Standards, initiated the development of Krylov subspace iteration methods.
Krylov subspace iteration methods solve equations of the form Ax = b, where A is an n x n matrix. When n is sufficiently large, direct solution becomes very expensive,
but the Krylov approach cleverly converts the problem into an iteration of the form K x_{i+1} = K x_i + b - A x_i.
Here K (from the first letter of the surname of the Russian mathematician Nikolai Krylov) is a constructed matrix that is close to A but much easier to solve with.
The magic of iterative algorithms is that they break a complex problem into stages of easy computation.
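A concrete member of the Krylov family is the conjugate gradient method of Hestenes and Stiefel, for systems where A is symmetric positive-definite. A minimal pure-Python sketch (the 2x2 system at the bottom is a made-up example, not from the original article):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve Ax = b for symmetric positive-definite A using the
    conjugate gradient method, a Krylov subspace iteration."""
    n = len(b)
    x = [0.0] * n
    r = b[:]              # residual r = b - A*x, with x = 0 initially
    p = r[:]              # search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break  # residual small enough: converged
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# Example: solve [[4, 1], [1, 3]] x = [1, 2]; exact solution (1/11, 7/11).
print(conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0]))
```

In exact arithmetic the method finishes in at most n iterations, but its practical value is that for large sparse systems a few dozen iterations often suffice, each needing only one matrix-vector product.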
IV. 1951: The Decompositional Approach to Matrix Computations
[1951: Alston Householder of Oak Ridge National Laboratory formalizes the decompositional approach to matrix computations.]
In 1951, Alston Householder of Oak Ridge National Laboratory formalized the decompositional approach to matrix computations.
This approach shows that any matrix can be factored into matrices of special forms, such as triangular, diagonal, and orthogonal matrices.
Its significance is that it made it possible to develop flexible and effective matrix computation software packages.
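One example of the decompositional approach is LU factorization, which splits a matrix into a lower-triangular and an upper-triangular factor. A minimal Doolittle-style sketch (added for illustration; it omits pivoting, so it assumes the leading principal minors are nonzero):

```python
def lu_decompose(A):
    """Doolittle LU factorization: A = L * U, where L is lower
    triangular with unit diagonal and U is upper triangular.
    No pivoting, so leading principal minors must be nonzero."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):       # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):   # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu_decompose(A)
print(L)  # [[1.0, 0.0], [1.5, 1.0]]
print(U)  # [[4.0, 3.0], [0.0, -1.5]]
```

Once the factors are computed, solving Ax = b reduces to two cheap triangular solves, and the same factorization can be reused for many right-hand sides; this separation of "factor once, solve many" is exactly the flexibility that made matrix software packages possible.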
V. 1957: The Fortran Optimizing Compiler
[1957: John Backus leads a team at IBM in developing the Fortran Optimizing Compiler.]
In 1957, a team at IBM led by John Backus created the Fortran optimizing compiler.
Fortran is short for Formula Translation.
It was the first high-level programming language in the world to be formally adopted, and it is still in circulation today.
The language has since evolved to Fortran 2008 and remains well known.
VI. 1959-61: The QR Algorithm for Computing Matrix Eigenvalues
[1959-61: J. G. F. Francis of Ferranti Ltd., London, finds a stable method for computing eigenvalues, known as the QR algorithm.]
From 1959 to 1961, J. G. F. Francis of Ferranti Ltd., London, found a stable method for computing eigenvalues,
the famous QR algorithm.
This is another algorithm from linear algebra. If you have studied linear algebra, you should remember "the eigenvalues of a matrix"; computing eigenvalues is
one of the core problems of matrix computation. The traditional approach involves finding the roots of a high-degree polynomial equation, which is very difficult when the problem is large.
The QR algorithm factors the matrix into the product of an orthogonal matrix (you are reading this article, so you know what an orthogonal matrix is :D) and an upper triangular matrix.
Like the Krylov method mentioned above, it is an iterative algorithm: it reduces the difficult root-finding problem for a high-degree equation to a sequence of easy
computational steps, making it feasible to compute the eigenvalues of large matrices on a computer.
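The iteration itself is remarkably simple to state: factor A = QR, form the reversed product RQ, and repeat; the diagonal converges to the eigenvalues. A pure-Python sketch of the basic unshifted iteration (the 2x2 symmetric example is made up; production implementations add Hessenberg reduction and shifts for speed and robustness):

```python
def qr_decompose(A):
    """Classical Gram-Schmidt QR factorization of a square matrix."""
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]  # columns of A
    Q_cols = []
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for k in range(len(Q_cols)):
            R[k][j] = sum(Q_cols[k][i] * cols[j][i] for i in range(n))
            v = [v[i] - R[k][j] * Q_cols[k][i] for i in range(n)]
        R[j][j] = sum(vi * vi for vi in v) ** 0.5
        Q_cols.append([vi / R[j][j] for vi in v])
    Q = [[Q_cols[j][i] for j in range(n)] for i in range(n)]
    return Q, R

def qr_eigenvalues(A, iterations=50):
    """Unshifted QR iteration: A_{k+1} = R_k Q_k is similar to A_k,
    so it keeps the same eigenvalues while the off-diagonal decays."""
    n = len(A)
    for _ in range(iterations):
        Q, R = qr_decompose(A)
        A = [[sum(R[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return [A[i][i] for i in range(n)]

print(qr_eigenvalues([[2.0, 1.0], [1.0, 2.0]]))  # approximately [3.0, 1.0]
```

The key fact is that RQ = Q^T (QR) Q is an orthogonal similarity transform of A, so every iterate has exactly the same eigenvalues; the iteration only reshapes the matrix until those eigenvalues appear on the diagonal.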
VII. 1962: Quicksort
[1962: Tony Hoare of Elliott Brothers, Ltd., London, presents Quicksort.]
In 1962, Tony Hoare of Elliott Brothers Ltd., London, presented quicksort.
Haha, congratulations: you have finally reached the first familiar algorithm!
A classic among sorting algorithms, quicksort is everywhere.
Quicksort was first devised by Sir Tony Hoare. Its basic idea is to partition the sequence to be sorted into two halves,
with the left half always "small" and the right half always "big", and to continue this process recursively until the entire sequence is ordered.
For Sir Tony Hoare, quicksort was only a small accidental discovery. His main contributions to computing
were his achievements in the theory of formal methods and his part in the design of the ALGOL 60 programming language, for which he received the 1980 Turing Award.
Quicksort's average time complexity is only O(n log n); compared with ordinary selection sort and bubble sort at O(n^2),
it was indeed a historic innovation.
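The partition-and-recurse idea above fits in a few lines of Python (a clarity-first sketch; Hoare's original partitions in place rather than building new lists):

```python
def quicksort(seq):
    """Quicksort: pick a pivot, partition the sequence into
    smaller/equal/larger parts, and recurse on each side.
    Average time O(n log n); this copying version favors clarity."""
    if len(seq) <= 1:
        return seq
    pivot = seq[len(seq) // 2]
    smaller = [x for x in seq if x < pivot]
    equal = [x for x in seq if x == pivot]
    larger = [x for x in seq if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

The O(n log n) average comes from the recursion depth: a reasonable pivot halves the problem, giving about log n levels of O(n) partitioning work; a consistently bad pivot (e.g. always the minimum) degrades it to O(n^2).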
VIII. 1965: The Fast Fourier Transform
[1965: James Cooley of the IBM T. J. Watson Research Center and John Tukey of Princeton University and AT&T Bell Laboratories unveil the fast Fourier transform.]
In 1965, James Cooley of the IBM T. J. Watson Research Center and John Tukey of Princeton University
and AT&T Bell Laboratories jointly unveiled the fast Fourier transform.
The fast Fourier transform (FFT) is a fast algorithm for the discrete Fourier transform, which is the cornerstone of digital signal processing. Its time complexity is only O(n log n).
Even more important than its time efficiency is that the FFT is very easy to implement in hardware,
so it is widely used in the field of electronics.
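The core divide-and-conquer trick behind the O(n log n) bound is short enough to show here (a textbook radix-2 Cooley-Tukey sketch added by the translator, for inputs whose length is a power of two):

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
    Splits the DFT into transforms of the even- and odd-indexed samples,
    giving O(n log n) instead of the direct DFT's O(n^2)."""
    n = len(x)
    if n == 1:
        return x
    even = fft(x[0::2])
    odd = fft(x[1::2])
    # "Twiddle factors" combine the two half-size transforms.
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k]
                for k in range(n // 2)]
    return ([even[k] + twiddled[k] for k in range(n // 2)] +
            [even[k] - twiddled[k] for k in range(n // 2)])

# The FFT of an impulse is flat: every frequency bin equals 1.
print(fft([1, 0, 0, 0]))
```

The hardware-friendliness mentioned above comes from the same structure: the butterfly operations at each level are regular, local, and identical, which maps neatly onto dedicated circuits.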
I will later devote an article in my Classic Algorithm Research Series to this algorithm.
IX. 1977: The Integer Relation Detection Algorithm
[1977: Helaman Ferguson and Rodney Forcade of Brigham Young University advance an integer relation detection algorithm.]
In 1977, Helaman Ferguson and Rodney Forcade of Brigham Young University advanced an integer relation detection algorithm.
Integer relation detection is an old problem whose history can be traced back to the time of Euclid. Specifically:
given a group of real numbers x1, x2, ..., xn, do there exist integers a1, a2, ..., an, not all zero, such that
a1*x1 + a2*x2 + ... + an*xn = 0?
Ferguson and Forcade solved this problem that year.
The algorithm has been applied, for example, to "simplifying the calculation of Feynman diagrams in quantum field theory". OK, you get the idea. :D
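Ferguson and Forcade's algorithm (and its successors such as PSLQ) is too involved to show here, but the problem statement itself is easy to illustrate with a brute-force search over small coefficients (a toy sketch added by the translator; its cost is exponential in n, which is exactly why the real algorithms matter):

```python
from itertools import product

def small_integer_relation(xs, bound=5, tol=1e-9):
    """Brute-force search for integers a1..an, not all zero and with
    |ai| <= bound, such that a1*x1 + ... + an*xn ~= 0. Exponential in
    len(xs); real algorithms such as PSLQ avoid this blowup."""
    coeff_range = range(-bound, bound + 1)
    for coeffs in product(coeff_range, repeat=len(xs)):
        if any(coeffs) and abs(sum(a * x for a, x in zip(coeffs, xs))) < tol:
            return coeffs
    return None

# The golden ratio phi satisfies phi^2 - phi - 1 = 0, so the numbers
# (1, phi, phi^2) admit an integer relation proportional to (-1, -1, 1).
phi = (1 + 5 ** 0.5) / 2
print(small_integer_relation([1.0, phi, phi * phi]))
```

In serious use, relation detection runs on numbers known to hundreds of digits; finding (or provably excluding) a relation at that precision is what makes the method useful for discovering exact formulas behind numerically computed constants.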
X. 1987: The Fast Multipole Algorithm
[1987: Leslie Greengard and Vladimir Rokhlin of Yale University invent the fast multipole algorithm.]
In 1987, Leslie Greengard and Vladimir Rokhlin of Yale University invented the fast multipole algorithm.
The fast multipole algorithm is used to compute the motion of N particles interacting via gravitational or electrostatic forces,
for example the stars in the Milky Way, or the interacting atoms in a protein.
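What the fast multipole method accelerates is the naive all-pairs force sum sketched below, which costs O(N^2); FMM groups distant particles into multipole expansions and brings the cost down to roughly O(N). The naive baseline, with made-up unit masses in 2-D (an illustration added by the translator, not the FMM itself):

```python
def direct_gravity(positions, masses, G=1.0, eps=1e-3):
    """Naive O(N^2) pairwise gravitational accelerations in 2-D.
    eps softens the force at tiny separations to avoid blowup.
    This is the computation the fast multipole method accelerates."""
    n = len(positions)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        xi, yi = positions[i]
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - xi
            dy = positions[j][1] - yi
            r2 = dx * dx + dy * dy + eps * eps
            f = G * masses[j] / (r2 ** 1.5)
            acc[i][0] += f * dx
            acc[i][1] += f * dy
    return acc

# Three equal bodies on a line: the middle one feels equal and
# opposite pulls, so its net acceleration is zero.
print(direct_gravity([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)],
                     [1.0, 1.0, 1.0]))
```

For a galaxy-scale simulation with N in the billions, the difference between N^2 and N pairwise evaluations per time step is the difference between impossible and routine.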
Original post: http://blog.csdn.net/v_JULY_v/article/details/6127953