1. Time Complexity
In general, we do not need to know the exact value of T(n); it usually suffices to estimate its upper bound. Specifically, if there exist a positive constant c, a constant N, and a function f(n) such that for any n > N we have

T(n) < c × f(n)

then once n is sufficiently large, f(n) gives an upper bound on T(n). In this case, we write

T(n) = O(f(n))

Here the O is called "big-O notation".
Similarly, if there exist a positive constant c, a constant N, and a function g(n) such that for any n > N we have

T(n) > c × g(n)

then once n is sufficiently large, g(n) gives a lower bound on T(n). In this case, we write

T(n) = Ω(g(n))

Here the Ω is called "big-Omega notation".
2. Space Complexity
Another important measure of an algorithm's performance is the amount of storage space it requires, that is, its space complexity. Obviously, for the same input size and the same time complexity, we prefer the algorithm that occupies less space. In practice, however, analysis tends to concentrate on time complexity, often to the exclusion of space complexity. This practice is justified by the following fact:

In the sense of asymptotic complexity, in any single run of an algorithm, the amount of storage space it consumes never exceeds the number of basic operations it performs during that run (each basic operation can allocate or reference at most a constant amount of storage, so space complexity never exceeds time complexity).
O(1): Constant time complexity
Algorithm: nonExtremeElement(S[], n)
Input: a set S of n integers
Output: any non-extreme element of S (one that is neither the minimum nor the maximum)
{
    take any three elements x, y, z ∈ S; // since S is a set, these three elements must be pairwise distinct
    by comparison, find their minimum min{x, y, z} and maximum max{x, y, z};
    output the element that is neither the minimum nor the maximum;
}
Since S is a finite set, its maximum and minimum elements each exist and are unique. Therefore, no matter how large S is, among its first three elements s[0], s[1], and s[2] there must be at least one non-extreme element. So we can take x = s[0], y = s[1], and z = s[2], which requires only three basic operations and hence O(3) time. Next, to determine the relative order of the three elements, at most three comparisons are needed (the reader is invited to prove this), again O(3) time. Finally, outputting the middle element takes only O(1) time. In total, the running time of this algorithm is:

T(n) = O(3) + O(3) + O(1) = O(7) = O(1)
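The procedure above can be sketched in Python as follows (the function name and the use of Python's built-in min/max are my own choices; the original gives only pseudocode). Note that the running time stays O(1) no matter how large S grows, because only three elements are ever examined:

```python
def non_extreme_element(S):
    """Return an element of S that is neither its global minimum nor maximum.

    Only the first three (pairwise-distinct) elements are examined,
    so this runs in O(1) time regardless of len(S). Requires len(S) >= 3.
    """
    it = iter(S)
    x, y, z = next(it), next(it), next(it)  # three basic operations: O(3)
    lo = min(x, y, z)                       # at most three comparisons
    hi = max(x, y, z)                       # in total: O(3)
    for v in (x, y, z):                     # output the middle element: O(1)
        if v != lo and v != hi:
            return v
```

The middle of the three sampled elements cannot be the global maximum (it is smaller than `hi`) nor the global minimum (it is larger than `lo`), so it is guaranteed to be non-extreme in all of S.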
O(log n): Logarithmic time complexity
Algorithm: baseConversion(n)
Input: a decimal integer n
Output: the ternary (base-3) representation of n
{
    repeat until n = 0 {
        output n mod 3; // take the remainder
        let n = n / 3;  // integer division
    }
}
Each iteration of the loop requires only two basic operations (a modulo and an integer division). To determine the number of iterations, note that n is reduced to at most 1/3 of its value in each iteration. Consequently, n drops to 0 after at most 1 + ⌊log₃n⌋ iterations. Therefore, the algorithm runs in O(2 × (1 + ⌊log₃n⌋)) = O(log₃n) time.

By the nature of big-O notation, the constant base of a logarithmic function is usually omitted. Here, for example, the base is the constant 3, so the complexity above is usually written as O(log n).
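A minimal Python sketch of the conversion (function name and the generalized `base` parameter are my own; digits are collected least-significant first, matching the order the pseudocode outputs them):

```python
def base_conversion(n, base=3):
    """Return the digits of non-negative integer n in the given base,
    least-significant digit first. Runs in O(log n) time."""
    digits = []
    while n > 0:
        digits.append(n % base)  # take the remainder: one basic operation
        n //= base               # integer division: one basic operation
    return digits or [0]         # n = 0 has the single digit 0
```

For example, 14 in base 3 is 112 (1×9 + 1×3 + 2), so the digits come out as [2, 1, 1].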
O(n): Linear time complexity
Algorithm: sum(a[], n)
Input: an array a[] of n integers
Output: the sum of all elements in a[]
{
    let s = 0;
    for each a[i], i = 0, 1, ..., n-1
        let s = s + a[i];
    output s;
}
Initializing s requires O(1) time. The main part of the algorithm is a loop; each iteration performs a single accumulation, which is a basic operation and completes in O(1) time. Each iteration accumulates one element, so a total of n iterations are needed. Therefore, the running time of this algorithm is

O(1) + O(1) × n = O(n + 1) = O(n)
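The summation algorithm translates directly to Python (written as an explicit loop rather than the built-in `sum`, to mirror the pseudocode step by step):

```python
def array_sum(a):
    """Sum the elements of a in O(n) time."""
    s = 0            # initialization: O(1)
    for x in a:      # n iterations
        s = s + x    # one accumulation per iteration: O(1) each
    return s         # output: O(1)
```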
O(n²): Quadratic time complexity
Algorithm: bubbleSort(s[], n)
Input: a sequence s[] of n elements, indexed by subscripts in [0..n-1], whose elements can be compared with one another
Output: the elements of s[] rearranged in non-descending order
{
    starting from s[0] and s[1], check each pair of adjacent elements in turn;
    whenever a pair is out of order, swap their positions;
    repeat the above pass until every pair of adjacent elements is in the required order;
}
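A minimal Python sketch of the passes described above (the `swapped` flag, my own addition, detects the termination condition "no adjacent pair is out of order"):

```python
def bubble_sort(s):
    """Sort s in place into non-descending order; O(n^2) time in the worst case."""
    n = len(s)
    swapped = True
    while swapped:                      # repeat passes until no swap occurs
        swapped = False
        for i in range(n - 1):          # check each adjacent pair in turn
            if s[i] > s[i + 1]:         # out of order: swap the pair
                s[i], s[i + 1] = s[i + 1], s[i]
                swapped = True
    return s
```

In the worst case (a reversed sequence), each of up to n passes scans n - 1 adjacent pairs, giving the quadratic bound.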
Complexity analysis of the algorithm