What is an algorithm, algorithm complexity, representation, and classification?

Algorithm Definition

An algorithm is an accurate and complete description of a solution scheme: a sequence of clear instructions for solving a problem. An algorithm represents a systematic way of describing the strategy and mechanism for solving a problem; that is, it produces the required output for a given valid input within a finite amount of time. If an algorithm is defective or unsuitable for a problem, executing it will not solve the problem. Different algorithms may use different amounts of time or space, or differ in efficiency, to accomplish the same task. The merits of an algorithm can be measured by its space complexity and time complexity.

 

Algorithms can be described using natural language, pseudocode, flowcharts, and other methods.

An algorithm should have the following seven important features:

1. Finiteness

Finiteness means that an algorithm must terminate after a finite number of steps;

2. Definiteness

Each step of an algorithm must be precisely and unambiguously defined;

3. Input

An algorithm has zero or more inputs, which characterize the initial state of the objects it operates on. Zero inputs means that the algorithm itself establishes the initial conditions;

4. Output

An algorithm has one or more outputs, which reflect the result of processing the input data. An algorithm without output is meaningless;

5. Feasibility

Every step of an algorithm can be decomposed into basic executable operations, and each step can be completed within a finite amount of time (also known as effectiveness);

 

6. High Efficiency

The algorithm executes quickly and uses few resources;

 

7. Robustness

The algorithm responds correctly even to unreasonable or invalid input data.

 

Computer scientist Niklaus Wirth published a famous book titled "Algorithms + Data Structures = Programs", which shows the central position of algorithms in computer science and in computer applications.

Algorithm complexity

The same problem can be solved by different algorithms, and the quality of an algorithm affects the efficiency of the algorithm and even of the whole program. The purpose of algorithm analysis is to select a suitable algorithm and to improve it. An algorithm is evaluated mainly in terms of time complexity and space complexity.

Time Complexity

The time complexity of an algorithm is the amount of time required to execute it. In general, the time required is expressed as a function f(n) of the problem size n, so the time complexity of the algorithm is written

T(n) = O(f(n))

As the problem size n grows, the growth rate of the algorithm's execution time is positively correlated with the growth rate of f(n); this is called the asymptotic time complexity.

Space complexity

The space complexity of an algorithm refers to the memory space the algorithm requires. It is computed and expressed in a way similar to time complexity, generally using asymptotic notation. Compared with time complexity, space complexity analysis is usually much simpler.

The following sections discuss time complexity and space complexity in more detail.


1. Time Complexity

(1) Time Frequency

 

The time an algorithm takes to execute cannot, in general, be computed theoretically; you have to run it on a computer to know. But it is neither possible nor necessary to test every algorithm on a machine; we only need to know which algorithm takes more time and which takes less. The time an algorithm consumes is proportional to the number of statements it executes: an algorithm that executes more statements takes more time. The number of statement executions in an algorithm is called the statement frequency or time frequency, denoted T(n).

 

(2) Time Complexity

 

In the time frequency just defined, n is called the problem size. As n changes, T(n) changes as well, and we want to know the pattern this change follows. For this we introduce the concept of time complexity.

 

In general, the number of times the basic operation is repeated in an algorithm is a function of the problem size n, denoted T(n). If there is an auxiliary function f(n) such that the limit of T(n)/f(n) as n approaches infinity is a nonzero constant, then f(n) is of the same order of magnitude as T(n). This is written T(n) = O(f(n)), and O(f(n)) is called the asymptotic time complexity of the algorithm.

 

If the number of statements an algorithm executes is a constant, its time complexity is O(1). Algorithms with different time frequencies can have the same time complexity: for example, T(n) = n^2 + 3n + 4 and T(n) = 4n^2 + 2n + 1 are different frequencies, but both have time complexity O(n^2).
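
As a hedged illustration of the statement-frequency idea above, the sketch below (in Python, with a hypothetical function count_pairs not taken from the text) counts the basic operations of a doubly nested loop; the count grows like n squared, so the time complexity is O(n^2).

    # A minimal sketch: count the basic operations of a doubly nested loop.
    # The dominant term of the count is n*n, so the time complexity is O(n^2).
    def count_pairs(items):
        n = len(items)
        operations = 0              # statement/time frequency counter, T(n)
        pairs = 0
        for i in range(n):
            for j in range(n):
                operations += 1     # one basic comparison per inner step
                if items[i] == items[j]:
                    pairs += 1
        return pairs, operations

    # Here T(n) is exactly n*n, which is O(n^2).
    print(count_pairs([1, 2, 2, 3]))   # -> (6, 16)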

 

Common time complexities, listed in increasing order of magnitude: constant order O(1), logarithmic order O(log2 n) (base-2 logarithm, the same below), linear order O(n), linearithmic order O(n log2 n), quadratic order O(n^2), cubic order O(n^3), ..., k-th power order O(n^k), and exponential order O(2^n). As the problem size n increases, the time complexity grows and the algorithm's execution efficiency decreases.

 
2. Space Complexity

Similar to time complexity, space complexity measures the storage space an algorithm requires while executing on a computer. It is written:

S(n) = O(f(n))
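
As a hedged sketch of this idea, the two hypothetical functions below (not from the text) compute the same sum; the first uses O(1) extra space, while the second builds an auxiliary list and therefore uses O(n) extra space.

    # Same result, different space complexity (a minimal illustrative sketch).
    def sum_constant_space(n):
        total = 0                        # O(1) extra space: a fixed number of variables
        for i in range(1, n + 1):
            total += i
        return total

    def sum_linear_space(n):
        values = list(range(1, n + 1))   # O(n) extra space: an auxiliary list
        return sum(values)

    print(sum_constant_space(10), sum_linear_space(10))   # 55 55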

 

 

Basic Methods of Algorithm Design and Analysis

1. Recurrence Method

The recurrence method describes a complex computation as the repetition of a simple operation (rule) over a number of steps.

Recurrence is a common technique for computing sequences. Each term of the sequence is calculated according to a fixed rule, usually by obtaining the value of a given term from several preceding terms. The idea is to turn a complex, large-scale computation into many repetitions of a simple step, which exploits the fact that a computer is fast and never tires.
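
A minimal sketch of the recurrence idea, assuming the familiar Fibonacci rule F(k) = F(k-1) + F(k-2) as the example: each term of the sequence is computed from the preceding terms by repeating one simple step.

    # Recurrence: build each term of the sequence from the preceding terms.
    def fibonacci(n):
        """Return F(n), with F(0) = 0 and F(1) = 1."""
        prev, curr = 0, 1
        for _ in range(n):
            prev, curr = curr, prev + curr   # one simple rule, repeated n times
        return prev

    print([fibonacci(k) for k in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]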

2. Recursion

The programming technique in which a program calls itself is called recursion. A procedure or function that calls itself, directly or indirectly, in its own definition typically transforms a large, complex problem into a smaller problem similar to the original. A recursive strategy can describe the repeated computation required to solve a problem with only a small amount of code, greatly reducing the size of the program. The power of recursion lies in defining an infinite set of objects with a finite set of statements. In general, recursion requires a boundary condition, a recursive forward phase, and a recursive return phase: while the boundary condition is not met, the recursion advances; once it is met, the recursion returns. Note: (1) recursion is a procedure or function calling itself; (2) when using a recursive strategy, there must be a clear condition that ends the recursion, called the recursion exit.
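
A minimal recursive sketch of the same Fibonacci sequence, illustrating the two parts named above: a boundary condition (the recursion exit) and a recursive call that reduces the problem to smaller, similar subproblems.

    # Recursion: the function calls itself on smaller subproblems.
    def fibonacci_recursive(n):
        if n < 2:                 # boundary condition: the recursion exit
            return n
        # recursive step: reduce to two smaller, similar subproblems
        return fibonacci_recursive(n - 1) + fibonacci_recursive(n - 2)

    print(fibonacci_recursive(7))   # 13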

3. Exhaustive Search

Exhaustive search, also called brute force, is a way of breaking a password by trying candidate passwords one by one until the real password is found. For example, for a password known to be four digits long and composed only of digits, there are at most 10,000 combinations, so the correct password can be found in at most 10,000 attempts. In theory this method can crack any password; the only question is how to shorten the time needed. For this reason, some people use computers to increase efficiency, and others use dictionaries to narrow down the set of candidate passwords.
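
A hedged sketch of the four-digit example above: check_password is a hypothetical stand-in for whatever actually verifies a guess; the loop simply tries all 10,000 combinations.

    # Brute force: try every four-digit combination until one matches.
    def crack_four_digit_pin(check_password):
        for guess in range(10000):
            candidate = f"{guess:04d}"       # "0000" .. "9999"
            if check_password(candidate):
                return candidate
        return None

    # Usage with a hypothetical secret, for illustration only.
    secret = "0473"
    print(crack_four_digit_pin(lambda pin: pin == secret))   # "0473"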

4. Greedy Algorithm

A greedy algorithm is a simpler and faster design technique for certain optimization problems. It builds a solution step by step, always making the choice that looks best according to some optimization measure for the current situation, without considering all the possibilities globally; this avoids the large amount of time an exhaustive search for the optimal solution would require. The greedy method works top-down, making greedy choices iteratively, and each greedy choice reduces the problem to a smaller subproblem. Each step yields a locally optimal choice, but the resulting global solution is not necessarily optimal, and the greedy method never backtracks.

The greedy algorithm can be seen as an improved, staged processing method. Its core is to choose a measurement standard appropriate to the problem, sort the inputs in the order required by that standard, and then consider one input at a time in that order. If adding an input to the current best partial solution (in the sense of this measure) would not yield a feasible solution, that input is discarded. A staged method that obtains an optimal solution under some measure in this way is called a greedy algorithm.

For a given problem there are usually several candidate measurement standards. At first glance they may all seem workable, but in fact greedy processing under most of them yields only a suboptimal solution rather than the optimal solution of the problem. Choosing the measurement standard that actually produces an optimal solution is therefore the core of greedy algorithm design; it is generally not easy, but once such a standard is found, the greedy method is especially effective. The overall optimal solution is then reached through a sequence of locally optimal (greedy) choices: at each step the best choice for the current state is made, the resulting subproblem is solved, and each greedy choice simplifies the problem to a smaller subproblem until an overall solution is obtained.
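
A minimal greedy sketch, assuming coin change as the example (the text above names no specific problem): the measurement standard is "always take the largest coin that still fits". With denominations 25/10/5/1 this happens to give an optimal answer, but, as noted above, a greedy choice is only locally optimal and may fail for other coin systems.

    # Greedy: repeatedly make the locally best choice (largest usable coin).
    def greedy_coin_change(amount, denominations=(25, 10, 5, 1)):
        coins = []
        for coin in sorted(denominations, reverse=True):
            while amount >= coin:
                amount -= coin
                coins.append(coin)
        return coins              # may be suboptimal for some denomination sets

    print(greedy_coin_change(63))            # [25, 25, 10, 1, 1, 1]
    print(greedy_coin_change(6, (4, 3, 1)))  # [4, 1, 1] -- the optimum is [3, 3]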

5. Divide and Conquer

Divide and conquer breaks a complex problem into two or more identical or similar subproblems, then breaks those subproblems into still smaller subproblems, and so on, until the final subproblems can be solved simply and directly; the solution of the original problem is then the combination of the subproblem solutions.

 

Problems that divide and conquer solves well generally have the following characteristics:

 

(1) The problem can be solved easily once its size is reduced to a certain point.
(2) The problem can be divided into several smaller problems of the same kind, that is, it has an optimal substructure.
(3) The solutions of the subproblems obtained by decomposing the problem can be combined into a solution of the original problem.
(4) The subproblems obtained by decomposing the problem are independent of each other, that is, they share no common subproblems.
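
A minimal divide-and-conquer sketch, assuming merge sort as the example: the problem is split into two similar subproblems, each is solved recursively, and the subproblem solutions are merged into the solution of the original problem.

    # Divide and conquer: split the problem, solve the halves, merge the results.
    def merge_sort(values):
        if len(values) <= 1:                  # small enough to solve directly
            return values
        mid = len(values) // 2
        left = merge_sort(values[:mid])       # solve two similar subproblems
        right = merge_sort(values[mid:])
        return merge(left, right)             # combine the subproblem solutions

    def merge(left, right):
        result = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                result.append(left[i])
                i += 1
            else:
                result.append(right[j])
                j += 1
        return result + left[i:] + right[j:]

    print(merge_sort([5, 2, 9, 1, 5, 6]))     # [1, 2, 5, 5, 6, 9]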

6. Dynamic Programming

Dynamic programming is a method used in mathematics and computer science to solve optimization problems that contain overlapping subproblems. The basic idea is to decompose the original problem into similar subproblems and, in the course of solving them, derive the solution of the original problem from the solutions of the subproblems. The idea of dynamic programming underlies many algorithms and is widely used in computer science and engineering.

 

Dynamic programming is an approach to solving optimization problems rather than a single specific algorithm. Unlike the search and numerical methods described above, it has no standard mathematical expression or clear-cut universal solution procedure. Dynamic programming is usually aimed at an optimization problem, and because problems differ in nature, the conditions that determine an optimal solution differ as well; the design therefore varies from problem to problem, and there is no universal dynamic programming algorithm that can solve every kind of optimization problem. Beyond a correct understanding of the basic idea and method, one must analyze each concrete problem, build a model with some imagination, and apply creative technique to solve it.
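
A minimal dynamic programming sketch, assuming the classic 0/1 knapsack problem as the example (the text above does not name a specific problem): the overlapping subproblems are "best value achievable with a given remaining capacity", and each is solved once and reused.

    # Dynamic programming: solve each overlapping subproblem once and reuse it.
    def knapsack(weights, values, capacity):
        # best[c] = best value achievable with capacity c using the items seen so far
        best = [0] * (capacity + 1)
        for i in range(len(weights)):
            for c in range(capacity, weights[i] - 1, -1):   # iterate capacity downwards
                best[c] = max(best[c], best[c - weights[i]] + values[i])
        return best[capacity]

    print(knapsack([2, 3, 4], [3, 4, 5], 5))   # 7 (take the items of weight 2 and 3)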

7. Iterative Method

The iterative method, also known as the method of successive substitution, is the process of repeatedly using the old value of a variable to derive its new value. Its counterpart is the direct method, which solves a problem in a single step. Iteration is divided into exact iteration and approximate iteration; the bisection method and Newton's method are typical approximate iteration methods. Iterative algorithms are a basic approach for solving problems with a computer: they exploit the machine's high speed and its aptitude for repetitive operations by making it execute a group of instructions (or steps) over and over, each execution deriving a new value of a variable from its previous value.
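
A minimal sketch of approximate iteration, using Newton's method for square roots (one of the examples named above); the starting guess and tolerance below are arbitrary illustrative choices.

    # Newton's iteration for sqrt(a): x_new = (x_old + a / x_old) / 2.
    def newton_sqrt(a, tolerance=1e-10):
        x = a if a > 1 else 1.0          # any positive starting guess will do
        while abs(x * x - a) > tolerance:
            x = (x + a / x) / 2          # derive the new value from the old one
        return x

    print(newton_sqrt(2.0))   # approximately 1.4142135623...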

8. Branch and Bound

Branch and bound is a widely used algorithmic technique. Applying it requires considerable skill, and the details differ from problem to problem. The basic idea is to search the space of all (finitely many) feasible solutions of a constrained optimization problem. During execution, the algorithm keeps splitting the space of feasible solutions into smaller and smaller subsets (the branches) and computes a lower or upper bound on the value of the solutions inside each subset (the bound). After each branching step, subsets whose bound is worse than the value of a known feasible solution are not branched further. Many subsets of solutions, that is, many nodes of the search tree, can thus be ignored, narrowing the search. The process continues until a feasible solution is found whose value is no worse than the bound of any remaining subset, so the algorithm returns an optimal solution. Like the greedy method, branch and bound is used to design algorithms for combinatorial optimization problems; the difference is that it searches the entire space of possible solutions, so the resulting algorithms usually have higher time complexity than greedy algorithms. Its advantage is that, like exhaustive search, it guarantees an optimal solution, yet it is not a blind exhaustive search: by using the bounds it can stop exploring subspaces that cannot contain an optimal solution (similar to pruning in artificial intelligence), which makes it more efficient than exhaustive search.
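
A hedged branch-and-bound sketch, assuming the 0/1 knapsack (maximization) as the example problem: each node branches on whether to take the next item, an optimistic upper bound is computed from the fractional relaxation (one common choice, not the only one), and branches whose bound cannot beat the best known feasible solution are pruned.

    # Branch and bound for the 0/1 knapsack (items sorted by value/weight ratio).
    def knapsack_branch_and_bound(weights, values, capacity):
        items = sorted(zip(weights, values), key=lambda wv: wv[1] / wv[0], reverse=True)
        best = 0

        def upper_bound(i, room, value):
            # Optimistic bound: allow a fraction of the first item that does not fit.
            for w, v in items[i:]:
                if w <= room:
                    room -= w
                    value += v
                else:
                    return value + v * room / w
            return value

        def search(i, room, value):
            nonlocal best
            if value > best:
                best = value                  # a better feasible solution
            if i == len(items) or upper_bound(i, room, value) <= best:
                return                        # prune: this branch cannot beat best
            w, v = items[i]
            if w <= room:
                search(i + 1, room - w, value + v)   # branch: take item i
            search(i + 1, room, value)               # branch: skip item i

        search(0, capacity, 0)
        return best

    print(knapsack_branch_and_bound([2, 3, 4], [3, 4, 5], 5))   # 7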


Algorithm Classification

Algorithms can be roughly divided into basic algorithms, data structure algorithms, number theory and algebra algorithms, computational geometry algorithms, graph theory algorithms, dynamic programming, numerical analysis, encryption algorithms, sorting algorithms, search algorithms, randomized algorithms, and parallel algorithms.

 

At a macro level, algorithms can be divided into three categories:

 

Finite, deterministic algorithms: these algorithms terminate within a finite amount of time. They may take a very long time to perform the specified task, but they still terminate within some finite time, and the results they produce often depend on the input values.

 

Finite, nondeterministic algorithms: these algorithms also terminate within a finite amount of time, but for a given input (or inputs) the result is not unique or determined.

 

Infinite algorithms: algorithms that never terminate, because no termination condition is defined or because the input data cannot satisfy the defined condition. Infinite algorithms usually arise from a failure to define the termination condition.

 
Example

There are many classic algorithms, for example the Euclidean algorithm, the circle-cutting method (for approximating pi), and Qin Jiushao's algorithm (Horner's method).
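
As a minimal sketch of the first of these classics, the Euclidean algorithm computes the greatest common divisor by repeatedly replacing the pair (a, b) with (b, a mod b):

    # Euclidean algorithm: gcd(a, b) = gcd(b, a mod b), until b becomes 0.
    def gcd(a, b):
        while b:
            a, b = b, a % b
        return a

    print(gcd(1071, 462))   # 21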

 
Classic Algorithm Monographs

There are many books on algorithms on the market; the most famous are The Art of Computer Programming and Introduction to Algorithms.
