Computing the Time Complexity and Space Complexity of Algorithms

Source: Internet
Author: User

Algorithm: an algorithm is a description of the steps for solving a particular problem, represented in a computer as a finite sequence of instructions, where each instruction stands for one or more operations.

The five properties of an algorithm:

    • Input: an algorithm has zero or more inputs.
    • Output: an algorithm has at least one output, which can take the form of printing or of returning one or more values.
    • Finiteness: the algorithm terminates automatically after a finite number of steps, without falling into an infinite loop, and each step completes within an acceptable time.
    • Definiteness: each step of the algorithm has an exact meaning with no ambiguity; under the same conditions there is only one execution path, and the same input always produces the same output.
    • Feasibility: each step of the algorithm must be realizable, that is, it can be carried out in a finite number of basic operations, so that the algorithm can be turned into a program that runs and produces correct results.

Requirements for algorithm design:

    • Correctness: the algorithm should have well-defined input, output, and processing with no ambiguity, correctly reflect the needs of the problem, and produce the right answer. Correctness has four levels, from low to high: (1) the program has no syntax errors; (2) the program produces the required output for legal input data; (3) the program produces results that meet the specification even for illegal input data; (4) the program produces the required output even for carefully chosen, difficult test data. A good algorithm design usually satisfies at least the third level.
    • Readability: another goal of algorithm design is to make the algorithm easy to read, understand, and communicate.
    • Robustness: when the input data is invalid, the algorithm can still handle it appropriately rather than produce inexplicable results.
    • Time efficiency and low storage: an algorithm should aim for high time efficiency and a low storage requirement.

Measurement of algorithm efficiency

1. After-the-fact measurement: design good test programs and test data, then use the computer's timer to compare the running times of programs compiled from different algorithms, and judge the algorithms' efficiency from the comparison.

2. Pre-analysis and estimation: before the program is compiled, estimate the algorithm's cost by mathematical analysis of the operations it performs.

The time a program written in a high-level language takes to run on a computer depends on the following factors:

(1) The strategy and method adopted by the algorithm;

(2) Code quality generated by compiling;

(3) The input scale of the problem;

(4) The speed at which the machine executes instructions.

The most reliable way to measure running time is to count the number of executions of the basic operations that consume the time: the running time is proportional to this count. When analyzing a program this way, the key is to view the program as an algorithm, i.e. a sequence of steps independent of any programming language.

```c
int sum = 0;
int n = 100;  /* input size n (example value) */
for (int i = 0; i < n; ++i) {
    sum += i;
}
```

In the code above the input size is n. Write the total time consumed (the number of basic operations executed, where each basic operation is assumed to cost O(1)) as f(n), ignoring the cost of the loop header. If each execution of `sum += i` counts as one basic operation, the summation is repeated n times, so f(n) = n.

When analyzing the running time of an algorithm, the key is to relate the number of basic operations to the input size: the count must be expressed as a function of the input size n.

Asymptotic growth of functions

Asymptotic growth of a function: given two functions f(n) and g(n), if there exists an integer N such that f(n) is always greater than g(n) for all n > N, we say that f(n) grows faster than g(n).

When judging the efficiency of an algorithm, the constants and lower-order terms of its count function can often be ignored; what matters is the order of the dominant (highest-order) term. By comparing the asymptotic growth of the key operation-count functions of several algorithms, we can conclude that as n (the input size) grows, one algorithm becomes steadily better, or steadily worse, than another. This is the theoretical basis of the pre-analysis estimation method: the time efficiency of an algorithm is estimated by its time complexity.
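As a concrete illustration of asymptotic growth, here is a small sketch with two hypothetical operation-count functions, f(n) = 2n + 3 (linear) and g(n) = n², chosen only for this example. For small n the linear function is actually larger, but past a crossover point the quadratic function dominates forever, which is exactly what "grows faster" means.

```c
#include <assert.h>

/* Hypothetical operation-count functions for two algorithms:
   f(n) = 2n + 3 (linear) and g(n) = n^2 (quadratic). */
long f(long n) { return 2 * n + 3; }
long g(long n) { return n * n; }
```

For n = 2, f(2) = 7 exceeds g(2) = 4, but already at n = 4, g(4) = 16 exceeds f(4) = 11, and the gap only widens as n grows; this is why constants and lower-order terms can be ignored.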

Time complexity of the algorithm

Time complexity of an algorithm: in algorithm analysis, the total number of statement executions T(n) is a function of the problem size (input size) n. We analyze how T(n) varies with n and determine the order of magnitude of T(n). The time complexity of the algorithm, which is its time measure, is written T(n) = O(f(n)). It means that as the problem size n grows, the growth rate of the algorithm's execution time is the same as the growth rate of f(n); this is called the asymptotic time complexity of the algorithm, time complexity for short. Here f(n) is some function of the problem size n.

Because time complexity is written with a capital O, this notation is called Big O notation.

In general, as n increases, the algorithm whose T(n) grows most slowly is the best algorithm. O(1) is called constant order, O(n) linear order, and O(n^2) quadratic order.

Derivation of the large O-order method

1. Replace all additive constants in the running-time count with the constant 1.

2. In the modified count function, keep only the highest-order term.

3. If the highest-order term exists and its coefficient is not 1, remove that coefficient. The result is the Big O order.
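To make the rules concrete, here is a sketch (with an invented workload) whose operation count is T(n) = 2n² + 3n + 1: one constant-cost setup step, a linear part, and a nested quadratic part. Applying the three rules, the constant 1 becomes 1, only 2n² is kept, and the coefficient 2 is dropped, giving O(n²).

```c
#include <assert.h>

/* Counts the basic operations of a hypothetical workload with
   T(n) = 2n^2 + 3n + 1: a constant setup, a linear loop that does
   3 operations per iteration, and a nested loop that does 2. */
long count_ops(int n) {
    long ops = 1;                 /* constant term: one-time setup */
    for (int i = 0; i < n; ++i)
        ops += 3;                 /* linear term: 3 ops per iteration */
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            ops += 2;             /* quadratic term: 2 ops per inner iteration */
    return ops;
}
```

Checking count_ops(10) gives 2·100 + 3·10 + 1 = 231, matching T(n) term by term before the Big O rules are applied.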

Note: the time complexity of a sequential structure is O(1), provided its statements are basic operations such as assignment and comparison.

The time complexity of a simple branch structure (one that contains no loop) is also O(1), again provided the operations in each branch are basic operations.

The time complexity of a loop structure varies; for a simple loop like the summation example above, it is O(n).

To determine the order of an algorithm, we often need to determine how many times a particular set of statements runs. Therefore, the key to analyzing an algorithm's complexity is analyzing its loop structures.

Logarithmic order

Consider the following loop:

```c
int count = 1;
while (count < n) {
    count = count * 2;
    /* sequence of program steps with O(1) time complexity */
}
```

So what is the time complexity of this algorithm? The input size is n, and `count` doubles on every iteration. Suppose the loop body executes x times; then the relationship between x and n is 2^x = n. Since time complexity is expressed as a function of the input size, we rewrite this as x = log2(n), so f(n) = log2(n). Deriving the Big O order, the time complexity of this algorithm is O(log n). The base 2 in log2(n) is usually omitted, because by the change-of-base formula logarithms of different bases differ only by a constant factor, which Big O notation ignores.
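A classic real algorithm with this halving pattern is binary search over a sorted array; the sketch below is a standard textbook version, not code from the original article. The search interval halves on every iteration, so the loop body runs at most about log2(n) times, giving O(log n).

```c
#include <assert.h>

/* Binary search over a sorted array of n ints.
   The interval [lo, hi] halves each iteration -> O(log n). */
int binary_search(const int *a, int n, int target) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* avoids overflow of lo + hi */
        if (a[mid] == target) return mid;
        if (a[mid] < target) lo = mid + 1;
        else                 hi = mid - 1;
    }
    return -1;                          /* not found */
}
```

For an array of a million elements, the loop runs at most about 20 times, which is the practical payoff of a logarithmic order.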

Square Order

Quadratic order is generally seen in simple nested for loops.

For example:

```c
for (int i = 0; i < n; ++i) {
    for (int j = 0; j < n; ++j) {
        /* sequence of program steps with O(1) time complexity */
    }
}
```
For each iteration of the outer for loop, the inner for loop executes n times. With input size n, the total number of executions is n^2, i.e. f(n) = n^2.
By the Big O derivation rules, the time complexity of this algorithm is O(n^2).
If the outer loop bound is changed to m, the outer input size is m and the inner input size is n; for each outer iteration the inner loop executes n times, so the total count is m*n.
Its time complexity is therefore O(m*n).
In general, the time complexity of a loop equals the complexity of the loop body multiplied by the number of times the loop runs.
```c
for (int i = 0; i < n; ++i) {
    for (int j = i; j < n; ++j) {
        /* sequence of program steps with O(1) time complexity */
    }
}
```

To compute the time complexity of this algorithm: when i = 0 the inner loop executes n times, when i = 1 it executes n - 1 times, when i = 2 it executes n - 2 times, ..., and when i = n - 1 it executes 1 time. The total number of executions is f(n) = n + (n-1) + (n-2) + ... + 1 = n(n+1)/2 = n^2/2 + n/2, so its time complexity is O(n^2).
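The arithmetic-series sum n(n+1)/2 can be checked empirically: the sketch below simply counts how many times the inner body of the triangular nest runs (the counter stands in for the O(1) loop body from the example above).

```c
#include <assert.h>

/* Counts executions of the inner body of the triangular loop nest:
   the inner loop runs n, n-1, ..., 1 times, totalling n(n+1)/2. */
long triangular_count(int n) {
    long count = 0;
    for (int i = 0; i < n; ++i)
        for (int j = i; j < n; ++j)
            ++count;              /* stands in for the O(1) loop body */
    return count;
}
```

For n = 10 the count is 55 = 10·11/2, and for n = 100 it is 5050, matching the closed form that leads to O(n²).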

The difficulty in computing time complexity lies in summing the counts of the relevant operations, so it helps to strengthen basic mathematical skills, especially knowledge of sequences and series.

Time complexity calculation that contains method calls

Simply put, expand the function call as if it were a macro definition and then proceed as in the examples above. You can also use Big O's addition or multiplication rules; the result is the same.

The addition rule of Big O: O(f1(n) + f2(n)) = O(max(f1(n), f2(n))), where f1 and f2 are operation-count functions of the input size. The addition rule applies to sequential structures and to if and switch structures.

The multiplication rule of Big O: O(f1(n)) * O(f2(n)) = O(f1(n) * f2(n)). It applies to nested loops such as the nested for loops above: the outer layer has time complexity O(n), i.e. its operation-count function is f1(n) = n, and the inner loop's operation-count function is f2(n) = n, so the time complexity of the whole nested loop is O(n * n) = O(n^2).
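The "expand the call like a macro" idea can be sketched as follows, with invented helper names for illustration: `helper` performs n basic operations, so it is O(n); calling it inside a loop that runs n times multiplies the counts, giving O(n·n) = O(n²) by the multiplication rule.

```c
#include <assert.h>

/* A hypothetical O(n) helper: performs n basic operations. */
long helper(int n) {
    long ops = 0;
    for (int i = 0; i < n; ++i)
        ++ops;                    /* one basic operation per iteration */
    return ops;
}

/* Calls the O(n) helper n times; expanding the call like a macro
   gives n * n basic operations, i.e. O(n^2) by the multiplication rule. */
long caller(int n) {
    long total = 0;
    for (int i = 0; i < n; ++i)
        total += helper(n);
    return total;
}
```

For n = 20 the total count is 400 = 20², as the multiplication rule predicts.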

The time complexity of a simple Boolean or arithmetic operation, and of simple I/O (memory I/O), is O(1); this is the unit time of Big O notation. Asking how long O(1) actually takes is meaningless: in computing time complexity we only want a rough theoretical estimate of the time an algorithm may consume, while the actual running time of a program also depends on the processing efficiency of the specific hardware platform.

Worst case and average case

The worst-case running time is a guarantee: the running time will never exceed it. In applications this is one of the most important requirements.

The average running time is the expected running time of the algorithm.

One approach to analyzing an algorithm is to average over all cases; the time complexity computed this way is called the average time complexity. Another approach is to compute the complexity of the worst case, which is called the worst-case time complexity. Unless stated otherwise, the time complexity of an algorithm usually refers to the worst case.

The spatial complexity of the algorithm

Space complexity of an algorithm: measured by the storage space the algorithm requires, it is written S(n) = O(f(n)), where n is the problem size and f(n) is a function describing the storage the algorithm occupies as a function of n.

In general, when a program executes on a machine, besides storing its own instructions, constants, variables, and input data, it needs storage units for the data it operates on. The space occupied by the input data depends only on the problem itself and is independent of the algorithm, so we only need to analyze the auxiliary storage units the algorithm requires during execution. If the auxiliary space required by the algorithm is constant relative to the amount of input data, the algorithm is said to work in place, and its space complexity is O(1).

Usually, "time complexity" refers to the running-time requirement and "space complexity" to the space requirement. When "complexity" is used without qualification, it normally means time complexity.
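The distinction between O(1) and O(n) auxiliary space can be sketched with two invented functions: one sums an array using a fixed number of variables regardless of n (working in place), while the other allocates a copy whose size grows with the input.

```c
#include <assert.h>
#include <stdlib.h>

/* O(1) auxiliary space: a fixed number of variables regardless of n,
   so the algorithm works in place. */
long sum_in_place(const int *a, int n) {
    long s = 0;
    for (int i = 0; i < n; ++i)
        s += a[i];
    return s;
}

/* O(n) auxiliary space: allocates n extra storage units for a reversed
   copy of the input, so S(n) = O(n). Caller must free the result. */
int *reversed_copy(const int *a, int n) {
    int *b = malloc(n * sizeof *b);
    for (int i = 0; i < n; ++i)
        b[i] = a[n - 1 - i];
    return b;
}
```

Both functions take O(n) time; they differ only in the auxiliary space they require, which is exactly what S(n) measures.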

