# Algorithm Complexity Analysis (Repost)

Source: Internet
Author: User

Reposted from http://www.cnblogs.com/gaochundong/p/complexity_of_algorithms.html

Why is algorithmic analysis needed?

• To predict the resources an algorithm requires:
  • computation time (CPU consumption)
  • memory space (RAM consumption)
  • communication time (bandwidth consumption)
• To predict the running time of the algorithm:
  • the number of basic operations performed at a given input size,
  • also called the algorithm's complexity (algorithm complexity).

How to measure the complexity of an algorithm?

• Memory (space)
• Time
• Number of instructions (number of steps)
• Number of specific operations:
  • number of disk accesses
  • number of network packets
• Asymptotic complexity

What does the running time of an algorithm depend on?

• The input data itself. (For example, if the data is already sorted, the time required may be less.)
• The size of the input data. (For example: 6 versus 6 × 10⁹.)
• The maximum running time. (Because the maximum running time is a commitment to the user.)

Types of algorithmic Analysis:

• Worst case: the maximum running time over all inputs of a given size. (Usually used.)
• Average case: the expected running time over all inputs of a given size. (Sometimes used.)
• Best case: usually the best case simply does not occur. (Bogus.)

For example, to search for a specified value in a list of length n:

• Worst case: n comparisons
• Average case: n/2 comparisons
• Best case: 1 comparison
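As a minimal sketch of the search above (the name `LinearSearch` is illustrative, not from the original), the three comparison counts come from a simple scan:

```csharp
// Linear search over a list of length n.
// Best case: the target is the first element -> 1 comparison.
// Worst case: the target is last or absent -> n comparisons.
static int LinearSearch(int[] list, int target)
{
    for (int i = 0; i < list.Length; i++)
    {
        if (list[i] == target)  // one comparison per element examined
            return i;
    }
    return -1;  // all n comparisons were made
}
```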

In practice, we generally consider only the worst-case behavior of an algorithm, that is, the maximum running time over all inputs of size n. The reasons are:

1. The worst-case running time of an algorithm is an upper bound (Upper Bound) on the running time for any input.
2. For some algorithms, the worst case occurs fairly often.
3. Generally speaking, the average case is often roughly as bad as the worst case.

The big idea (Big idea) behind algorithm analysis:

1. Ignore machine-dependent constants.
2. Focus on the growth trend of the running time.

For example, T(n) = 73n³ + 29n³ + 8888 has the growth trend T(n) = Θ(n³).

The common asymptotic notations (asymptotic notation) are O, Θ, and Ω. The Θ notation gives both an asymptotic upper and lower bound of a function; the O notation is used when there is only an asymptotic upper bound, and the Ω notation when there is only an asymptotic lower bound. Although Θ is technically more precise, complexity is usually stated with the O notation.
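For reference, the standard formal definitions behind these notations (a textbook formulation, not spelled out in the original) can be written as:

```latex
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 > 0 : 0 \le f(n) \le c\,g(n) \ \text{for all } n \ge n_0
\\
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 > 0 : 0 \le c\,g(n) \le f(n) \ \text{for all } n \ge n_0
\\
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n))
```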

Use the big-O notation (Big O Notation) to express the upper bound on the worst-case running time. For example:

• Linear complexity O(n) means each element is processed once.
• Quadratic complexity O(n²) means each element is processed n times.
| Notation | Intuition |
| --- | --- |
| f(n) = O(g(n)) | f is bounded above by g asymptotically |
| f(n) = Ω(g(n)) | f is bounded below by g asymptotically |
| f(n) = Θ(g(n)) | f is bounded both above and below by g asymptotically |

For example:

• T(n) = O(n³) is equivalent to T(n) ∈ O(n³).
• T(n) = Θ(n³) is equivalent to T(n) ∈ Θ(n³).

Which mean, respectively:

• The asymptotic growth of T(n) is no faster than n³.
• The asymptotic growth of T(n) is exactly as fast as n³.
| Complexity | Notation | Description | Example |
| --- | --- | --- | --- |
| Constant | O(1) | The number of operations is constant, regardless of the input size. | n = 1,000,000 → 1–2 operations |
| Logarithmic | O(log₂ n) | The number of operations is proportional to log₂ n. | n = 1,000,000 → ~20 operations |
| Linear | O(n) | The number of operations is proportional to n. | n = 1,000,000 → 1,000,000 operations |
| Quadratic | O(n²) | The number of operations is proportional to the square of n. | n = 500 → 250,000 operations |
| Cubic | O(n³) | The number of operations is proportional to the cube of n. | n = 200 → 8,000,000 operations |
| Exponential | O(2ⁿ), O(kⁿ), O(n!) | Exponential growth; the operation count explodes rapidly. | n = 20 → 1,048,576 operations |

Note 1: A quick refresher: logₐ b = y means aʸ = b. So log₂ 4 = 2, because 2² = 4; likewise log₂ 8 = 3, because 2³ = 8. We say log₂ n grows much more slowly than n because, for example, when n = 8, log₂ n = 3.

Note 2: The base-10 logarithm is usually called the common logarithm. For brevity, the common logarithm log₁₀ n is abbreviated lg n; for example, log₁₀ 5 is written lg 5.

Note 3: The logarithm with base e (an irrational number) is usually called the natural logarithm. For convenience, the natural logarithm logₑ n is abbreviated ln n; for example, logₑ 3 is written ln 3.

Note 4: In Introduction to Algorithms, the notation lg n = log₂ n, i.e., the base-2 logarithm. Changing the base of a logarithm changes its value only by a constant factor, so when we don't care about such constant factors we freely use the lg n notation, just as we use the O notation. Computer scientists often consider base 2 the most natural, because so many algorithms and data structures split a problem into two halves.
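The change-of-base rule in Note 4 can be checked with a short sketch (the `Log2` helper is my own illustration, built on .NET's natural-log `Math.Log`):

```csharp
// Change of base: log2(n) = ln(n) / ln(2).
// Switching bases multiplies the logarithm by a constant (1 / ln 2 ≈ 1.4427),
// which is exactly why the base is irrelevant inside O(lg n).
static double Log2(double n)
{
    return Math.Log(n) / Math.Log(2);
}
```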

Typical relationships between time complexity and actual running time, for various input sizes n:

| Complexity | 10 | 20 | 50 | 100 | 1000 | 10000 | 100000 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| O(1) | <1s | <1s | <1s | <1s | <1s | <1s | <1s |
| O(log₂ n) | <1s | <1s | <1s | <1s | <1s | <1s | <1s |
| O(n) | <1s | <1s | <1s | <1s | <1s | <1s | <1s |
| O(n·log₂ n) | <1s | <1s | <1s | <1s | <1s | <1s | <1s |
| O(n²) | <1s | <1s | <1s | <1s | <1s | 2s | 3–4 min |
| O(n³) | <1s | <1s | <1s | <1s | 20s | 5 hours | 231 days |
| O(2ⁿ) | <1s | <1s | 260 days | hangs | hangs | hangs | hangs |
| O(n!) | <1s | hangs | hangs | hangs | hangs | hangs | hangs |
| O(nⁿ) | 3–4 min | hangs | hangs | hangs | hangs | hangs | hangs |

The method for estimating the asymptotic running time of a block of code has the following steps:

1. Identify the basic operation that determines the algorithm's running time.
2. Find the code that performs that operation and mark it with a count of 1.
3. Look at how the marked line is reached. If it sits inside a loop, multiply the count by the number of iterations: 1 × n. If it sits inside nested loops, keep multiplying, e.g., 1 × n × m.
4. The largest count found dominates the running time and gives the upper bound, i.e., the algorithm's complexity.
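The four steps above can be sketched on a pair of nested loops (illustrative code, not from the original article):

```csharp
// The innermost statement is the basic operation, marked 1 (step 2).
// It sits inside two loops, so its count becomes 1 * n * m (step 3),
// which dominates and gives the O(n * m) upper bound (step 4).
static long CountSteps(int n, int m)
{
    long steps = 0;
    for (int i = 0; i < n; i++)       // outer loop: n iterations
        for (int j = 0; j < m; j++)   // inner loop: m iterations each
            steps++;                  // marked 1 -> executes n * m times
    return steps;
}
```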

Example code (1):

```
decimal Factorial(int n)
{
    if (n == 0)
        return 1;
    else
        return n * Factorial(n - 1);
}
```

For factorial (factorial), given size n, the number of basic steps performed is n, so the algorithm's complexity is O(n).

Example code (2):

```
int FindMaxElement(int[] array)
{
    int max = array[0];
    for (int i = 0; i < array.Length; i++)
    {
        if (array[i] > max)
        {
            max = array[i];
        }
    }
    return max;
}
```

Here n is the size of the array; in the worst case n comparisons are needed to find the maximum, so the algorithm's complexity is O(n).

Example code (3):

```
long FindInversions(int[] array)
{
    long inversions = 0;
    for (int i = 0; i < array.Length; i++)
        for (int j = i + 1; j < array.Length; j++)
            if (array[i] > array[j])
                inversions++;
    return inversions;
}
```

Here n is the size of the array; the basic step executes approximately n(n − 1)/2 times, so the algorithm's complexity is O(n²).

Example code (4):

```
long SumMN(int n, int m)
{
    long sum = 0;
    for (int x = 0; x < n; x++)
        for (int y = 0; y < m; y++)
            sum += x * y;
    return sum;
}
```

Given sizes n and m, the basic step executes n·m times, so the algorithm's complexity is O(n·m), which is O(n²) when m is proportional to n.

Example code (5):

```
decimal Sum3(int n)
{
    decimal sum = 0;
    for (int a = 0; a < n; a++)
        for (int b = 0; b < n; b++)
            for (int c = 0; c < n; c++)
                sum += a * b * c;
    return sum;
}
```

Here, given size n, the basic step executes n·n·n = n³ times, so the algorithm's complexity is O(n³).

Example code (6):

```
decimal Calculation(int n)
{
    decimal result = 0;
    for (int i = 0; i < (1 << n); i++)
        result += i;
    return result;
}
```

Here, given size n, the basic step executes 2ⁿ times, so the algorithm's complexity is O(2ⁿ).

Example code (7):

Fibonacci Sequence:

• fib(0) = 0
• fib(1) = 1
• fib(n) = fib(n − 1) + fib(n − 2)

fib = 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...

```
int Fibonacci(int n)
{
    if (n <= 1)
        return n;
    else
        return Fibonacci(n - 1) + Fibonacci(n - 2);
}
```

Here, given size n, the time required to compute fib(n) is the time to compute fib(n − 1) plus the time to compute fib(n − 2), plus a constant amount of work:

T(n) = O(1) for n ≤ 1

T(n) = T(n − 1) + T(n − 2) + O(1)

```
                        fib(5)
                    /          \
              fib(4)            fib(3)
             /      \          /      \
        fib(3)    fib(2)   fib(2)    fib(1)
        /    \    /    \   /    \
   fib(2) fib(1)  ...
```

Describing this structure with a recursion tree shows that the algorithm's complexity is O(2ⁿ).
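A quick sandwich bound (a standard argument, added here for completeness) confirms the exponential growth read off the tree:

```latex
T(n) \ge 2\,T(n-2) \;\Rightarrow\; T(n) = \Omega\!\left(2^{n/2}\right),
\qquad
T(n) \le 2\,T(n-1) + O(1) \;\Rightarrow\; T(n) = O(2^n).
```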

Example code (8):

```
int Fibonacci(int n)
{
    if (n <= 1)
        return n;
    else
    {
        int[] f = new int[n + 1];
        f[0] = 0;
        f[1] = 1;
        for (int i = 2; i <= n; i++)
        {
            f[i] = f[i - 1] + f[i - 2];
        }
        return f[n];
    }
}
```

Here we use the array f to store intermediate Fibonacci results, which reduces the algorithm's complexity to O(n).

Example code (9):

```
int Fibonacci(int n)
{
    if (n <= 1)
        return n;
    else
    {
        int iter1 = 0;
        int iter2 = 1;
        int f = 0;
        for (int i = 2; i <= n; i++)
        {
            f = iter1 + iter2;
            iter1 = iter2;
            iter2 = f;
        }
        return f;
    }
}
```

Furthermore, because only the previous two values are actually needed at each step, we can keep them in two variables instead of allocating an array, saving space. The complexity remains O(n).

Example code (10):

The Fibonacci algorithm can be optimized further using matrix exponentiation (the matrix-exponentiation algorithm).

```
static int Fibonacci(int n)
{
    if (n <= 1)
        return n;

    int[,] f = { { 1, 1 }, { 1, 0 } };
    Power(f, n - 1);

    return f[0, 0];
}

static void Power(int[,] f, int n)
{
    if (n <= 1)
        return;

    int[,] m = { { 1, 1 }, { 1, 0 } };

    Power(f, n / 2);
    Multiply(f, f);
    if (n % 2 != 0)
        Multiply(f, m);
}

static void Multiply(int[,] f, int[,] m)
{
    int x = f[0, 0] * m[0, 0] + f[0, 1] * m[1, 0];
    int y = f[0, 0] * m[0, 1] + f[0, 1] * m[1, 1];
    int z = f[1, 0] * m[0, 0] + f[1, 1] * m[1, 0];
    int w = f[1, 0] * m[0, 1] + f[1, 1] * m[1, 1];

    f[0, 0] = x;
    f[0, 1] = y;
    f[1, 0] = z;
    f[1, 1] = w;
}
```

After this optimization, the algorithm's complexity is O(log₂ n).

Example code (11):

The more concise code in C # is as follows.

```
static double Fibonacci(int n)
{
    double sqrt5 = Math.Sqrt(5);
    double phi = (1 + sqrt5) / 2.0;
    double fn = (Math.Pow(phi, n) - Math.Pow(1 - phi, n)) / sqrt5;
    return fn;
}
```

Example code (12):

The basic operation of insertion sort is to insert one element into data that is already sorted, producing new sorted data one element larger. The algorithm is well suited to sorting small amounts of data; its time complexity is O(n²).

```
private static void InsertionSortInPlace(int[] unsorted)
{
    for (int i = 1; i < unsorted.Length; i++)
    {
        if (unsorted[i - 1] > unsorted[i])
        {
            int key = unsorted[i];
            int j = i;
            while (j > 0 && unsorted[j - 1] > key)
            {
                unsorted[j] = unsorted[j - 1];
                j--;
            }
            unsorted[j] = key;
        }
    }
}
```

This article, "Algorithm Complexity Analysis", was published by Dennis Gao on the cnblogs blog; crawling or reposting it without the author's consent is prohibited.
