Analysis of the complexity of an algorithm


The preceding sections discussed the concept of an algorithm's complexity and the importance of analyzing it. This section describes how to analyze the complexity of an algorithm and gives a set of operational rules. An algorithm must be expressed in some programming language, such as Pascal; therefore, analyzing the complexity of an algorithm and its order of growth can be replaced by analyzing the complexity, and its order of growth, of the program that expresses the algorithm.

As mentioned above, for the complexity of an algorithm we consider only the worst case, the best case, and the average case, and we usually focus on the worst case. For definiteness, this section addresses the worst case.

Time complexity is again used as the example. Below are eight rules for analyzing time complexity; together they cover the worst-case time complexity analysis of algorithms expressed in Pascal.

Before listing and explaining these rules one by one, it should be pointed out that when analyzing a part of a program (a statement, a program segment, a procedure, or a function), we may take either the input size n of the whole program or some local size parameter as the independent variable of the complexity function. However, the complexity function of the overall program, as the final result, can only take the input size of the whole program as its independent variable.

For a sequential algorithm, the corresponding Pascal program is a sequence of Pascal statements executed one after another. It is therefore clear that the time complexity of the algorithm (that is, the time it requires) equals the sum of the time complexities (that is, the times required) of the individual statements of the corresponding Pascal program. Consequently, if there is a metering rule for the time required to execute each kind of Pascal statement, then computing the time required to execute a program, that is, an algorithm, is just an algebra problem; we can then analyze the algorithm's time complexity by applying the arithmetic rules for O, Ω, and Θ provided in the third section.

Thus our time-metering rules need to address only Pascal's limited set of basic operations and a few basic statements. These rules are listed below, together with the necessary explanations.

Rule (1)

An assignment, a comparison, an arithmetic operation, a logical operation, or reading/writing a single constant or a single variable requires only one unit of time.

Rule (2)

The conditional statement "if C then S1 else S2" requires only T_C + max(T_S1, T_S2) time, where T_C is the time required to evaluate the conditional expression C, and T_S1 and T_S2 are the times required to execute statements S1 and S2, respectively.
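For instance, under rules (1) and (2) the following conditional statement is charged as indicated in the comments. This is only a small illustration; x, y, and max are hypothetical integer variables assumed to be declared elsewhere:

    if x > y then      { evaluating the condition x > y: 1 unit }
      max := x         { S1: one assignment, 1 unit }
    else
      max := y;        { S2: one assignment, 1 unit }
    { total charged: 1 + max(1, 1) = 2 units }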

Rule (3)

The selection statement "case A of a1: S1; a2: S2; ...; am: Sm end" requires at most max(T_S1, T_S2, ..., T_Sm) time, where T_Si is the time required to execute statement Si, i = 1, 2, ..., m.

Rule (4)

It takes only one unit of time to access a single component of an array or a single field of a record.
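As a small illustration of rules (1) and (4), and of summing the costs of a statement sequence as described above, consider the following fragment; the integer variables i and x and the array A are hypothetical and assumed to be declared elsewhere, and the accounting shown is one possible reading of the rules:

    i := 1;          { assignment: 1 unit }
    x := A[i];       { component access + assignment: 2 units }
    x := x + 1;      { addition + assignment: 2 units }
    A[i] := x;       { component access + assignment: 2 units }
    { total for the sequence: 1 + 2 + 2 + 2 = 7 units }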

Rule (5)

The time required to execute a for loop statement equals the time required to execute the loop body multiplied by the number of iterations of the loop.
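For instance, under rules (1) and (5) the following loop is charged the cost of its body times the number of iterations; n, s, and j are hypothetical integer variables assumed to be declared elsewhere:

    s := 0;            { 1 unit }
    for j := 1 to n do
      s := s + j;      { loop body: addition + assignment, 2 units per iteration }
    { the for statement costs about 2n units, i.e. Θ(n) }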

Rule (6)

The time required to execute a while loop statement "while C do S" or a repeat loop statement "repeat S until C" equals the sum of the time required to evaluate the conditional expression C and the time required to execute the loop body S, multiplied by the number of iterations. Unlike rule (5), the number of iterations here is implicit.

Consider, for example, the while loop statement in the function B_search. By rules (1)-(4), evaluating the conditional expression "(not found) and (u >= l)" and executing the loop body

    i := (u + l) div 2;
    if c = A[i] then found := true
    else if c > A[i] then l := i + 1
    else u := i - 1;

take only Θ(1) time, while the number of iterations is log m. Therefore, executing this while statement requires only Θ(log m) time.
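For reference, here is one way the surrounding iterative function might look. This is only a reconstruction sketch built around the loop body quoted above; the original function is assumed to have been given in an earlier section, and A is assumed to be a global array of integers sorted in increasing order:

    function B_search(c, l, u: integer): integer;
    var i: integer;
        found: boolean;
    begin
      found := false;
      i := 0;
      { search for c in the sorted array A[l..u] }
      while (not found) and (u >= l) do
      begin
        i := (u + l) div 2;
        if c = A[i] then found := true
        else if c > A[i] then l := i + 1
        else u := i - 1;
      end;
      if found then B_search := i   { position of c in A }
      else B_search := 0            { c is not in A[l..u] }
    end;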

In many cases, when applying rules (5) and (6), the number of iterations should be determined from the meaning of the specific algorithm, so that the time estimate is not overly conservative. Here is an example.

Consider the following program segment:

    size := m;                                                  { 1 }
    i := 1;                                                     { 1 }
    while i < n do
    begin
      i := i + 1;
      S1;                                                       { Θ(n) }
      if size > 0 then                                          { 1 }
      begin
        assign to t an arbitrary value in the range 1 to size;  { Θ(1) }
        size := size - t;                                       { 2 }
        for j := 1 to t do
          S2                                                    { Θ(n) }
      end;
    end;

The time required to execute each line is marked to its right. If we do not look more deeply into the meaning of the algorithm, we can only observe that 1 ≤ t ≤ size ≤ m, estimate the number of iterations of the for loop inside the while loop as O(m), and then estimate the worst-case time complexity of the program segment as O(n² + m·n²). If we analyze the meaning of the algorithm carefully, the result is different. In fact, inside the while loop body both t and size are dynamic; they depend on the while loop parameter i, that is, t = t(i), written t_i, and size = size(i), written size_i, for i = 1, 2, ..., n-1. For each i, 1 ≤ i ≤ n-1, the relationship between t_i and m is only implicit, which makes it difficult to calculate exactly how many times the for loop body S2 is executed. The estimate above is conservative because we counted the executions of S2 too locally. If, instead of confining the count to a single for loop, we count the total number of times S2 is executed over the entire program segment, then from the way t_i is chosen in the algorithm and from size_{i+1} = size_i - t_i, i = 1, 2, ..., n-1, we get size_n = size_1 - (t_1 + t_2 + ... + t_{n-1}). Finally, using size_1 = m and size_n = 0, we obtain t_1 + t_2 + ... + t_{n-1} = m. So over the entire program segment, S2 is executed a total of m times, and the time this requires is Θ(m·n). The time required by the remaining statements is easy to compute with rules (1)-(6). In the worst case, the time complexity of the entire program segment is Θ(n² + m·n). This result is clearly more accurate than the rough estimate above.
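To make the global count of S2 executions concrete, here is a small self-contained sketch that replaces S1 and S2 by trivial stand-ins and counts how often S2 runs; the sizes m and n and the particular choice of t are arbitrary assumptions, chosen so that size is used up by the last iteration:

    program CountS2;
    var
      m, n, size, t, i, j, s2count: integer;
    begin
      m := 10; n := 8;                  { example sizes, chosen arbitrarily }
      s2count := 0;
      size := m;
      i := 1;
      while i < n do
      begin
        i := i + 1;
        { S1 would go here }
        if size > 0 then
        begin
          if i = n then t := size       { one admissible choice of t, }
          else t := 1;                  { always satisfying 1 <= t <= size }
          size := size - t;
          for j := 1 to t do
            s2count := s2count + 1      { stand-in for S2 }
        end;
      end;
      writeln('S2 executed ', s2count, ' times; m = ', m)
    end.

Running this prints that S2 was executed exactly m times, matching the amortized count above.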

Rule (7)

Rule (7) concerns the goto statement. In Pascal, the goto statement was introduced to make it convenient to express a jump from the middle of a loop body to the end of the loop body or to the end of the loop statement. If our programs use the goto statement only in this intended way, we may assume that it requires no extra time in the time complexity analysis; this neither underestimates nor overestimates the program's worst-case running time. If a program abuses the goto statement, that is, transfers control backwards to an earlier statement, the situation becomes more complicated. When such a transfer creates a loop, the analysis can still be carried out along the lines of rules (1)-(6), as long as the new loop does not cross other loops and the loops remain properly nested inside one another. When the use of the goto statement makes the program structure disorderly, it is recommended to rewrite the program before analyzing it.
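As an illustration of the intended use described above, here is a minimal sketch of a goto that jumps forward from the middle of a loop body to just past the loop statement; the label number, the array, and its contents are hypothetical:

    program GotoDemo;
    label 99;
    var
      i: integer;
      A: array[1..10] of integer;
    begin
      for i := 1 to 10 do A[i] := i * i;
      for i := 1 to 10 do
        if A[i] > 50 then
          goto 99;            { jump out of the loop to the label below }
    99:
      writeln('stopped at i = ', i)
    end.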

Rule (8)

For procedure call and function call statements, the required time consists of two parts: one part for the transfer of control, and one for executing the procedure (or function) body. The analysis can then be carried out with rules (1)-(7), working from the inside out along the call hierarchy, level by level, until the running time of the outermost level is obtained. If a procedure (or function) calls itself directly or indirectly, this level-by-level, inside-out analysis no longer works. In that case, we can let the time required by each recursive procedure (or function) be an undetermined function of the corresponding problem size, then use the meaning of the procedure (or function) to establish recurrence relations among these undetermined functions and so obtain a recursive equation. Finally, the asymptotic order of the worst-case complexity is determined from the asymptotic order of the solution of the recursive equation.

There are many kinds of recursive equations, and many methods for finding the asymptotic order of their solutions; a more systematic introduction is given in the next section. Here we only take a simple recursive procedure (or function) as an example to show how the recursive equation is set up, and we state, without derivation, the asymptotic order of its worst-case time complexity.

Example: consider the function B_search again, this time rewritten as a recursive function. To keep things concise, the time required to execute each line, computed with rules (1)-(6) above, has been marked to the right of the corresponding line:

    function B_search(c, l, u: integer): integer;           { unit time }
    var index, element: integer;
    begin
      if (u < l) then                                        { 1 }
        B_search := 0                                        { 1 }
      else
      begin
        index := (l + u) div 2;                              { 3 }
        element := A[index];                                 { 2 }
        if element = c then                                  { 1 }
          B_search := index                                  { 1 }
        else if element > c then                             { 1 }
          B_search := B_search(c, l, index - 1)              { 3 + T(m/2) }
        else
          B_search := B_search(c, index + 1, u);             { 3 + T(m/2) }
      end;
    end;

where T(m) is the time required by B_search in the worst case when the problem size is u - l + 1 = m (in this case, the array A[l..u] does not contain c). According to rules (1)-(8), we have

    T(m) = 2                                     for m = 0
    T(m) = 1 + 3 + 2 + 1 + 1 + (3 + T(m/2))      for m ≥ 1

or, simplified,

    T(0) = 2,    T(m) = T(m/2) + 11              for m ≥ 1

This is a recursive equation about T(m). With the iteration method described in the next section it is easy to obtain

    T(m) = 11·log m + 13 = Θ(log m)
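As a brief sketch of that iteration (unrolling), assuming for simplicity that m is a power of 2 and using the boundary value T(0) = 2 from above:

    T(m) = T(m/2) + 11
         = T(m/4) + 2·11
         = ...
         = T(m/2^k) + k·11
         = T(1) + 11·log m            (taking k = log m, so that m/2^k = 1)
         = (T(0) + 11) + 11·log m
         = 13 + 11·log m = Θ(log m)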

Before the end of this section, let us take a brief look at the worst-case space complexity analysis of algorithms. Rules similar to those for time complexity analysis can be given, and we will not spell them out here. It should be pointed out, however, that when a procedure (or function) is called recursively, the extra overhead of the implicit stack must be taken into account, because the existing programming techniques for implementing recursive procedure (or function) calls require an implicit, extra stack (not shown in the program text) for support. Each recursive call of a procedure (or function) saves the local information of the current level and the return address of the call on top of this stack, and this continues down to the innermost call. Therefore, the extra storage space required by a recursive procedure (or function) call is proportional to the maximum size of this stack, that is, to the depth of the recursive calls, with a proportionality factor equal to the amount of data saved at each level. For example, for the recursive function B_search above, the depth of the recursive calls in the worst case is log m; therefore, in the worst case, the extra storage space required to call it is Θ(log m).
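As an illustration of this last point, here is a small self-contained sketch that instruments the recursive B_search with a hypothetical depth counter to observe the maximum recursion depth; the array A, its size M, and its contents are assumptions made only for this sketch:

    program DepthDemo;
    const
      M = 16;                       { assumed array size }
    var
      A: array[1..M] of integer;
      depth, maxDepth, i: integer;  { hypothetical counters, not in the original }

    function B_search(c, l, u: integer): integer;
    var index, element: integer;
    begin
      depth := depth + 1;                         { entering one more level }
      if depth > maxDepth then maxDepth := depth;
      if u < l then
        B_search := 0
      else
      begin
        index := (l + u) div 2;
        element := A[index];
        if element = c then
          B_search := index
        else if element > c then
          B_search := B_search(c, l, index - 1)
        else
          B_search := B_search(c, index + 1, u)
      end;
      depth := depth - 1                          { leaving this level }
    end;

    begin
      for i := 1 to M do A[i] := 2 * i;           { sorted data }
      depth := 0; maxDepth := 0;
      writeln(B_search(1, 1, M));                 { worst case: 1 is not in A }
      writeln('max recursion depth = ', maxDepth) { about log M + 1 }
    end.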
