Top Ten Famous Computer Algorithms of the 20th Century

Source: Internet
Author: User

I. 1946: The Monte Carlo Method

[1946: John von Neumann, Stan Ulam, and Nick Metropolis, all at the Los Alamos Scientific Laboratory, cook up the Metropolis algorithm, also known as the Monte Carlo method.]

In 1946, John von Neumann, Stan Ulam, and Nick Metropolis, three scientists at the Los Alamos Scientific Laboratory, jointly invented the Metropolis algorithm, also known as the Monte Carlo method.

Its basic idea can be illustrated as follows:

Draw a square with one-meter sides on the ground, and inside it draw an irregular shape with a pink stroke. Now we need to calculate the area of this irregular shape. How do we do that? The Monte Carlo method tells us to scatter n soy beans (n being a large natural number) uniformly at random over the square, then count how many beans land inside the irregular shape. If there are m such beans, the area of this strange shape is approximately m/n square meters. The larger n is, the more accurate the estimate. Here we assume that all beans lie flat on the plane and do not overlap.

The Monte Carlo method can also be used to estimate pi: have the computer repeatedly generate a pair of random real numbers between 0 and 1, treat each pair as a point in the unit square, and check whether it falls inside the quarter unit circle. Since the ratio of the quarter circle's area to the square's area is pi/4 : 1, the fraction of points landing inside the circle approaches pi/4 as the number of random points grows, and the estimate gets closer and closer to pi. Convergence is slow, though: even matching the first few digits of pi takes a great many points.
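
As a minimal sketch of the pi estimate just described, using only the Python standard library:

```python
import random

def estimate_pi(n):
    """Estimate pi by scattering n random points in the unit square
    and counting how many fall inside the quarter unit circle."""
    inside = sum(1 for _ in range(n)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * inside / n   # area ratio quarter-circle : square = pi/4

print(estimate_pi(1_000_000))  # typically ~3.14; accuracy improves slowly with n
```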

II. 1947: The Simplex Method

[1947: George Dantzig, at the RAND Corporation, creates the simplex method for linear programming.]

In 1947, George Dantzig of the RAND Corporation invented the simplex method. Since then, the simplex method has been an important cornerstone of linear programming. Linear programming, simply put, is this: given a set of linear constraints (every variable appears only to the first power), such as a1*x1 + b1*x2 + c1*x3 > 0, find the extreme value of a given linear objective function.

This may sound abstract, but real-world examples are not uncommon. For instance, a company has limited manpower and materials to put into production (the "linear constraints"), and its goal is to maximize profit (the "maximum of the objective function"). See, linear programming is not so abstract after all!

As a branch of operations research, linear programming has become an important tool in the field of management science.

Dantzig's simplex method is an extremely effective way of solving linear programming problems.
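
A minimal sketch of the factory example above, assuming SciPy is installed; note that modern SciPy's linprog uses the HiGHS solvers rather than Dantzig's original tableau simplex, and the resource numbers here are made up for illustration:

```python
from scipy.optimize import linprog

# Maximize profit 3*x1 + 2*x2 subject to resource limits:
#   x1 +   x2 <= 4   (e.g. labor hours)
#   x1 + 3*x2 <= 6   (e.g. raw material)
#   x1, x2 >= 0
# linprog minimizes, so we negate the objective to maximize.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal production plan and maximum profit
```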

III. 1950: Krylov Subspace Iteration Methods

[1950: Magnus Hestenes, Eduard Stiefel, and Cornelius Lanczos, all from the Institute for Numerical Analysis at the National Bureau of Standards, initiate the development of Krylov subspace iteration methods.]

In 1950, Magnus Hestenes, Eduard Stiefel, and Cornelius Lanczos, all at the Institute for Numerical Analysis of the US National Bureau of Standards, initiated the development of Krylov subspace iteration methods.

Krylov subspace iteration methods are used to solve linear systems of equations of the form Ax = b, where A is an n x n matrix. When n is sufficiently large, computing the solution directly becomes very difficult; the Krylov approach cleverly converts the problem into the iteration Kx_{i+1} = Kx_i + (b - Ax_i), where K (the letter comes from the surname of the Russian mathematician Aleksei Krylov) is a matrix constructed to be close to A but easy to invert. The magic of iterative algorithms is that they break a complex problem down into stages of easy-to-compute substeps.
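
As a concrete taste of this family, here is a minimal sketch (NumPy assumed) of the conjugate gradient method, the classic Krylov iteration that Hestenes and Stiefel introduced for symmetric positive-definite systems:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve Ax = b for symmetric positive-definite A by the
    conjugate gradient method."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# small SPD test system: 4x + y = 1, x + 3y = 2
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))   # ~ [0.0909, 0.6364]
```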

IV. 1951: The Decompositional Approach to Matrix Computations

[1951: Alston Householder of Oak Ridge National Laboratory formalizes the decompositional approach to matrix computations.]

In 1951, Alston Householder of Oak Ridge National Laboratory formalized the decompositional approach to matrix computations. The approach shows that any matrix can be factored into matrices of special forms, such as triangular, diagonal, and orthogonal matrices. Its significance is that it made the development of flexible matrix computation software packages possible.
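
As a small illustration (NumPy assumed), here is one such factorization, the QR decomposition, in which A is written as an orthogonal matrix times an upper-triangular one:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

Q, R = np.linalg.qr(A)   # orthogonal Q, upper-triangular R

print(np.allclose(A, Q @ R))            # True: A = Q R
print(np.allclose(Q.T @ Q, np.eye(3)))  # True: Q is orthogonal
```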

V. 1957: The Fortran Optimizing Compiler

[1957: John Backus leads a team at IBM in developing the Fortran Optimizing Compiler.]

In 1957, a team at IBM led by John Backus created the Fortran optimizing compiler. FORTRAN is a contraction of "formula translation". It was the world's first high-level programming language to be officially adopted, and it is still in circulation today; the language has since evolved to Fortran 2008 and remains well known.

VI. 1959-61: The QR Algorithm for Computing Matrix Eigenvalues

[1959-61: J. G. F. Francis of Ferranti Ltd., London, finds a stable method for computing eigenvalues, known as the QR algorithm.]

Between 1959 and 1961, J. G. F. Francis of Ferranti Ltd., London, found a stable method for computing eigenvalues: the famous QR algorithm.

This is another algorithm related to linear algebra. If you have studied linear algebra, you should remember matrix eigenvalues; computing them is one of the core tasks of matrix computation. The traditional approach involves finding the roots of a high-order polynomial equation, which becomes very difficult when the problem is large. The QR algorithm factors the matrix into the product of an orthogonal matrix (if you are reading this article, you presumably know what an orthogonal matrix is :D) and an upper triangular matrix. Like the Krylov methods mentioned above, it is an iterative algorithm: it reduces the difficult root-finding problem for a high-order equation to a sequence of steps that are easy to compute, which makes it feasible to find the eigenvalues of large matrices on a computer.
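
A minimal sketch of the unshifted QR iteration (NumPy assumed): repeatedly factor A = QR and form RQ, which is similar to A, so the eigenvalues are preserved while the iterates drift toward triangular form. Production implementations first reduce to Hessenberg form and add shifts.

```python
import numpy as np

def qr_eigenvalues(A, iters=500):
    """Unshifted QR iteration: A_{k+1} = R_k Q_k, similar to A_k."""
    Ak = np.array(A, dtype=float)
    for _ in range(iters):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q              # similarity transform: eigenvalues unchanged
    return np.sort(np.diag(Ak)) # diagonal converges to the eigenvalues

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(qr_eigenvalues(A))               # ~ [1.382, 3.618]
print(np.sort(np.linalg.eigvals(A)))   # check against LAPACK
```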


VII. 1962: Quicksort

[1962: Tony Hoare of Elliott Brothers, Ltd., London, presents quicksort.]

In 1962, Tony Hoare of Elliott Brothers, Ltd., London, presented quicksort.

Haha, congratulations: you have finally reached the first familiar algorithm.

A classic among sorting algorithms, quicksort is everywhere.

Quicksort was first designed by Sir Tony Hoare. Its basic idea is to partition the sequence to be sorted into two halves, with everything in the left half "small" and everything in the right half "big", and to repeat this process recursively until the entire sequence is ordered. For Sir Tony Hoare, quicksort was little more than a casual discovery; his main contributions to computing are the theory of formal methods and his work on the definition and design of programming languages such as ALGOL 60, achievements for which he won the 1980 Turing Award.

For more information about quicksort and its applications, see a series I wrote, "Mastering the Eight Major Sorting Algorithms"; its first installment covers the quicksort algorithm.


The average time complexity of quicksort is only O(n log n), a historic improvement over plain selection sort and bubble sort.
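
A minimal, not-in-place sketch of the idea in Python (real libraries use tuned in-place variants):

```python
def quicksort(seq):
    """Partition around a pivot, then recurse on each half."""
    if len(seq) <= 1:
        return seq
    pivot = seq[len(seq) // 2]
    left  = [x for x in seq if x < pivot]   # the "small" half
    mid   = [x for x in seq if x == pivot]
    right = [x for x in seq if x > pivot]   # the "big" half
    return quicksort(left) + mid + quicksort(right)

print(quicksort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```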

VIII. 1965: The Fast Fourier Transform

[1965: James Cooley of the IBM T. J. Watson Research Center and John Tukey of Princeton University and AT&T Bell Laboratories unveil the fast Fourier transform.]

In 1965, James Cooley of the IBM T. J. Watson Research Center and John Tukey of Princeton University and AT&T Bell Laboratories jointly unveiled the fast Fourier transform.

The fast Fourier transform (FFT) is a fast algorithm for computing the discrete Fourier transform, which is the cornerstone of digital signal processing. Its time complexity is only O(n log n); and even more important than its time efficiency, the FFT is very easy to implement in hardware, so it is used extremely widely in electronics.
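
A minimal sketch of the recursive radix-2 Cooley-Tukey FFT in plain Python (the input length must be a power of two; real code would use numpy.fft or a hardware pipeline):

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT: split into even/odd halves, recurse,
    then combine with twiddle factors."""
    n = len(x)
    if n == 1:
        return x
    even = fft(x[0::2])
    odd  = fft(x[1::2])
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + twiddled[k] for k in range(n // 2)] + \
           [even[k] - twiddled[k] for k in range(n // 2)]

print([round(abs(v), 3) for v in fft([1, 1, 1, 1, 0, 0, 0, 0])])
```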

IX. 1977: Integer Relation Detection

[1977: Helaman Ferguson and Rodney Forcade of Brigham Young University advance an integer relation detection algorithm.]

In 1977, Helaman Ferguson and Rodney Forcade of Brigham Young University proposed an integer relation detection algorithm.

Integer relation detection is an old problem whose history can be traced all the way back to the era of Euclid. Specifically: given a group of real numbers x1, x2, ..., xn, do there exist integers a1, a2, ..., an, not all zero, such that a1*x1 + a2*x2 + ... + an*xn = 0? Helaman Ferguson and Rodney Forcade of Brigham Young University solved this problem that year. The algorithm has been applied, for example, to simplifying the calculation of Feynman diagrams in quantum field theory.
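
As an illustration (mpmath assumed installed): the mpmath library ships pslq, an implementation of PSLQ, a later integer relation algorithm developed by Ferguson and collaborators as a descendant of this 1977 work. Here it rediscovers the defining relation of the golden ratio:

```python
from mpmath import mp, mpf, sqrt, pslq

mp.dps = 30                  # working precision in decimal digits
phi = (1 + sqrt(5)) / 2      # the golden ratio

# Look for integers (a1, a2, a3), not all zero, with
# a1*1 + a2*phi + a3*phi**2 = 0.
print(pslq([mpf(1), phi, phi**2]))  # -> [1, 1, -1], i.e. phi**2 = phi + 1
```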

X. 1987: The Fast Multipole Algorithm

[1987: Leslie Greengard and Vladimir Rokhlin of Yale University invent the fast multipole algorithm.]

In 1987, Leslie Greengard and Vladimir Rokhlin of Yale University invented the fast multipole algorithm.

The fast multipole algorithm is used to compute the accurate motion of N particles interacting through gravitational or electrostatic forces, for example the stars in a galaxy or the atoms in a protein.
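
The fast multipole method itself is too involved for a short listing, but the naive O(N^2) direct summation it accelerates (to roughly O(N)) is easy to show. A minimal NumPy sketch, with a made-up softening parameter eps to avoid division blow-ups:

```python
import numpy as np

def direct_gravity(pos, mass, G=1.0, eps=1e-3):
    """Naive O(N^2) pairwise gravitational accelerations -- the cost
    that the fast multipole method reduces to roughly O(N)."""
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        d = pos - pos[i]                    # vectors to every other body
        r2 = (d * d).sum(axis=1) + eps**2   # softened squared distances
        r2[i] = np.inf                      # skip self-interaction
        acc[i] = G * (mass[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
    return acc

rng = np.random.default_rng(0)
pos = rng.random((100, 3))
mass = rng.random(100)
print(direct_gravity(pos, mass)[0])  # acceleration of the first body
```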

