Why is algorithmic analysis needed?
- To predict the resources the algorithm will require
- Calculation time (CPU consumption)
- Memory space (RAM consumption)
- Communication time (bandwidth consumption)
- To predict the running time of the algorithm
- The number of basic operations performed for a given input size, i.e., the algorithmic complexity
How to measure the complexity of an algorithm?
- Memory
- Time
- Number of instructions (steps)
- Number of specific operations
- Number of disk accesses
- Number of network packets
- Asymptotic complexity (a brief sketch contrasting two of these measures follows this list)
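As a small illustration (a minimal sketch, not from the original article; the function name and data are assumptions), the Python snippet below contrasts two of the measures listed above: wall-clock time, which depends on the machine, and a count of basic operations, which does not.

```python
import time

def sum_list(values):
    """Sum a list while counting basic operations (additions)."""
    total = 0
    ops = 0                      # number of basic operations performed
    for v in values:
        total += v
        ops += 1
    return total, ops

data = list(range(1_000_000))

start = time.perf_counter()
total, ops = sum_list(data)
elapsed = time.perf_counter() - start

# Wall-clock time varies from machine to machine; the operation count does not.
print(f"time = {elapsed:.4f}s, basic operations = {ops}")
```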
What does the running time of an algorithm depend on?
- The input data itself. (For example, if the data is already sorted, the time required may be lower.)
- The size of the input data. (Example: 6 versus 6 * 10^9.)
- The maximum running time sought. (The maximum running time is a commitment made to the user.)
Types of algorithm analysis
- Worst case: the maximum running time over all inputs of a given size. (Usually used.)
- Average case: the expected running time over all inputs of a given size. (Sometimes used.)
- Best case: the best case usually does not occur in practice. (Bogus.)
Example
Searching sequentially for a specified value in a list of length n (a sketch follows below):
Worst case: n comparisons;
Average case: n/2 comparisons;
Best case: 1 comparison.
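A minimal Python sketch of this sequential search, counting comparisons so the three cases can be observed (the function name and the counting instrumentation are illustrative additions, not from the original):

```python
def sequential_search(items, target):
    """Return (index, comparisons) for a left-to-right scan of items."""
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1
        if value == target:
            return i, comparisons       # found: best case is 1 comparison
    return -1, comparisons              # not found: worst case is n comparisons

data = [7, 3, 9, 1, 5]
print(sequential_search(data, 7))   # (0, 1)  best case
print(sequential_search(data, 5))   # (4, 5)  last element: n comparisons
print(sequential_search(data, 8))   # (-1, 5) value absent: n comparisons
```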
In practice, we generally consider only the worst-case behavior of the algorithm, that is, the maximum running time over all inputs of size n. The reasons are:
- The worst-case running time of an algorithm is an upper bound on the running time for any input.
- For some algorithms, the worst case occurs fairly often.
- The average case is often roughly as bad as the worst case.
Algorithm analysis keeps the big picture (big idea) in mind. Its basic ideas are:
- Ignore those constants that depend on the machine.
- Focus on the growth trend of the running time.
For example, the growth trend of T(n) = 73n^3 + 29n^3 + 8888 is T(n) = Θ(n^3).
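A short sketch of this idea (using only the coefficients from the example above) shows that the ratio T(n) / n^3 settles toward a constant as n grows, which is exactly why the machine-dependent constants can be ignored:

```python
def T(n):
    # The example polynomial from the text: 73n^3 + 29n^3 + 8888
    return 73 * n**3 + 29 * n**3 + 8888

for n in (10, 100, 1000, 10_000):
    # The ratio settles near the constant 102, so T(n) = Θ(n^3).
    print(n, T(n) / n**3)
```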
Asymptotic notation commonly includes the O, Θ, and Ω notations. Θ notation gives both an asymptotic upper bound and an asymptotic lower bound on a function; O notation is used when only an asymptotic upper bound is available, and Ω notation when only an asymptotic lower bound is available. Although Θ notation is technically more precise, running time is usually stated with O notation.
Big O notation is used to express an upper bound on the worst-case running time. For example, linear complexity O(n) means each element is processed once, and quadratic complexity O(n^2) means each element is processed n times (a sketch contrasting the two follows the example below).
For example:
T(n) = O(n^3) is equivalent to T(n) ∈ O(n^3);
T(n) = Θ(n^3) is equivalent to T(n) ∈ Θ(n^3).
These mean, respectively:
The asymptotic growth of T(n) is no faster than n^3;
The asymptotic growth of T(n) is exactly as fast as n^3.
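A hedged sketch (constructed here, not from the original) of the two complexity classes mentioned above: a single loop that touches each element once, O(n), and nested loops that touch each element n times, O(n^2):

```python
def linear_scan(items):
    """O(n): each element is processed exactly once."""
    touched = 0
    for _ in items:
        touched += 1
    return touched

def all_pairs(items):
    """O(n^2): each element is processed once per element, i.e., n times."""
    touched = 0
    for _ in items:
        for _ in items:
            touched += 1
    return touched

data = list(range(100))
print(linear_scan(data))   # 100    -> grows like n
print(all_pairs(data))     # 10000  -> grows like n^2
```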
- Note 1: A quick math refresher: log_a(b) = y means a^y = b. So log2(4) = 2 because 2^2 = 4, and log2(8) = 3 because 2^3 = 8. We say log2(n) grows more slowly than n because, for example, when n = 8, log2(n) is only 3.
- Note 2: Logarithms with base 10 are called common logarithms. For brevity, the common logarithm of n, log10(n), is abbreviated lg n; for example, log10(5) is written lg 5.
- Note 3: Logarithms whose base is the irrational number e are called natural logarithms. For convenience, the natural logarithm of n, log_e(n), is abbreviated ln n; for example, log_e(3) is written ln 3.
- Note 4: In Introduction to Algorithms, the notation lg n means log2(n), the logarithm with base 2. Changing the base of a logarithm only changes its value by a constant factor, so when these constant factors do not matter we freely write lg n, much as we use O notation. Computer scientists often regard base-2 logarithms as the most natural, because so many algorithms and data structures involve splitting a problem in half (see the sketch below).
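A minimal sketch of such a halving algorithm (an illustration added here, not part of the original notes): binary search over a sorted list performs at most about lg n iterations, which is why base-2 logarithms feel natural in this context.

```python
def binary_search(sorted_items, target):
    """Return (index or -1, iterations); the search space halves each step."""
    lo, hi = 0, len(sorted_items) - 1
    iterations = 0
    while lo <= hi:
        iterations += 1
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid, iterations
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, iterations

data = list(range(1024))          # n = 1024, lg n = 10
print(binary_search(data, -1))    # (-1, 10): about lg n iterations in the worst case
```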
The usual relationship between time complexity and running time
The following steps can be used to estimate the asymptotic running time of a block of code (a worked sketch follows the list):
- Identify the basic steps that determine the algorithm's running time.
- Find the code that performs each such step and label it 1.
- Look at the context of each line labeled 1. If it sits inside a loop, change the label from 1 to 1 times the number of iterations, i.e., 1 * n. If it sits inside nested loops, keep multiplying, e.g., 1 * n * m.
- The largest label found dominates the running time and gives the upper bound used to describe the algorithm's complexity.
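A worked sketch applying these labeling steps (the function and data are made up for illustration, not taken from the original):

```python
def count_pairs_in_range(values, low, high):
    count = 0                            # label: 1 (runs once)
    for x in values:                     # outer loop: n iterations
        for y in values:                 # inner loop: n iterations
            if low <= x + y <= high:     # label: 1 * n * n
                count += 1
    return count                         # label: 1

# The largest label is 1 * n * n, so the asymptotic upper bound is O(n^2).
print(count_pairs_in_range([1, 2, 3, 4], 3, 5))   # prints 9
```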