Analysis of time complexity in algorithm interview

Source: Internet
Author: User
Tags: pow, sorting

Example: Given an array of strings, first sort the characters of each string alphabetically, then sort the whole array in dictionary order. What is the time complexity of the entire operation?

A: Suppose the longest string has length s, and there are n strings in the array.
Sorting one string costs O(s log s); there are n strings, so this phase is O(n * s log s).
Sorting the array of strings costs O(s * n log n): the sort makes O(n log n) comparisons, and each string comparison takes up to O(s).

==> O(n * s log s) + O(s * n log n) = O(ns(log n + log s))
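As a sketch of the operation being analyzed (the function name is illustrative, not from the source):

```cpp
#include <algorithm>
#include <string>
#include <vector>
using namespace std;

// Two-phase sort from the example above.
void sortStrings(vector<string>& arr) {
    // Phase 1: sort the characters of each string: n * O(s log s)
    for (string& str : arr)
        sort(str.begin(), str.end());
    // Phase 2: sort the array: O(n log n) comparisons, each up to O(s)
    sort(arr.begin(), arr.end());
}
```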

In some cases, algorithmic complexity depends on the use case.

There is also the notion of data scale: the back-of-the-envelope estimate.

Use the following program to test:

    for (int x = 1; x <= 9; x++) {
        int n = pow(10, x);
        clock_t startTime = clock();
        long long sum = 0;   // long long: the running sum overflows int for large n
        for (int i = 0; i < n; i++)
            sum += i;
        clock_t endTime = clock();
        cout << "10^" << x << " : "
             << double(endTime - startTime) / CLOCKS_PER_SEC << " s" << endl;
    }

This is an O(n) algorithm. Running it on a local 4-core i7 machine gives the following results:

    10^1 : 0 s
    10^2 : 0 s
    10^3 : 0 s
    10^4 : 0 s
    10^5 : 0 s
    10^6 : 0 s
    10^7 : 0.03125 s
    10^8 : 0.25 s
    10^9 : 2.4375 s

In other words, if the program must produce results within 1 s, the data size had best not exceed 10^8, and certainly not reach 10^9. That is:
O(n^2): can handle data on the order of 10^4
O(n log n): can handle data on the order of 10^7
O(n): can handle data on the order of 10^8

Common complexity analysis

\(O(1)\)
    void swap(int& a, int& b) {
        int tmp = a;
        a = b;
        b = tmp;
    }
\(O(n)\): note that the constant coefficient is not necessarily 1.
    int sum(int n) {
        int sum = 0;
        for (int i = 1; i <= n; i++) {
            sum += i;
        }
        return sum;
    }
\(O(n^{2})\)
    // Selection sort
    for (int i = 0; i < n; i++) {
        int minIndex = i;
        for (int j = i + 1; j < n; j++) {
            if (arr[j] < arr[minIndex]) {
                minIndex = j;
            }
        }
        swap(arr[i], arr[minIndex]);
    }
\(O(\log n)\)
    // lower_bound
    int binSearch(const vector<int>& nums, int lo, int hi, int key) {
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;
            if (nums[mid] < key)
                lo = mid + 1;
            else
                hi = mid;
        }
        return lo;
    }

Consider this example: how many times must n be divided by 10 before it reaches 0? The answer is \(\log_{10}n\).

    // Warning!! this code is buggy
    // Various cases still need handling (e.g. num == 0, negative numbers)
    string intToString(int num) {
        string s = "";
        while (num) {
            s += '0' + num % 10;
            num /= 10;
        }
        reverse(s.begin(), s.end());
        return s;
    }

Logarithms to base 2 and base 10 differ only by a constant factor (a linear relationship), so they are the same in order of magnitude.
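The equivalence follows from the change-of-base formula:

\[\log_{10}n = \frac{\log_{2}n}{\log_{2}10} \approx 0.301\,\log_{2}n \quad\Rightarrow\quad O(\log_{10}n) = O(\log_{2}n) = O(\log n)\]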

\(O(n\log n)\)

Although the following code is also a doubly nested loop, the complexity is \(O(n\log n)\), because the outer loop variable grows exponentially (it doubles each time), so the outer loop runs only \(\log_{2}n\) times.

    void hello(int n) {
        for (int sz = 1; sz < n; sz += sz) {
            for (int i = 1; i < n; i++) {
                cout << "Hello" << endl;
            }
        }
    }
Verifying complexity by experiment

I clearly wrote an \(O(n\log n)\) algorithm; why does the interviewer say it is \(O(n^{2})\)?

You can validate it yourself: see what scale of data it can handle and compare against the back-of-the-envelope estimates.
Experiment and observe the trend. For example, double the data size each time and observe how the running time changes.
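One way to run that experiment deterministically is to count basic operations instead of wall-clock time (a sketch; the counter functions are illustrative):

```cpp
// Doubling n should roughly double the count for an O(n) loop
// and quadruple it for an O(n^2) loop.
long long opsLinear(int n) {
    long long ops = 0;
    for (int i = 0; i < n; i++)
        ops++;                      // O(n) work
    return ops;
}

long long opsQuadratic(int n) {
    long long ops = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            ops++;                  // O(n^2) work
    return ops;
}
```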

Complexity analysis of recursive algorithms

For a single recursive call, the complexity is usually \(O(T \cdot depth)\), where \(T\) is the work done in each call and \(depth\) is the recursion depth; it can also be derived from the recurrence relation.

As in the following code:

    double pow(double x, int n) {
        assert(n >= 0);
        if (n == 0) return 1.0;
        double t = pow(x, n / 2);
        if (n % 2)
            return x * t * t;
        else
            return t * t;
    }

The recursion depth is \(depth = \log n\) and \(T = 1\), so the complexity is \(O(\log n)\).

    int f(int n) {
        // base case here
        return f(n - 1) + f(n - 1);
    }

With multiple recursive calls, the recursion depth is \(depth = n\) with 2 calls per level, and expanding the recurrence gives a complexity of \(O(2^{n})\):
\[\begin{split}
f(n) &= 2f(n-1) \\
&= 4f(n-2) \\
&= 8f(n-3) \\
&\;\;\cdots \\
&= 2^{n}f(1) \\
&= O(2^{n})
\end{split} \tag{1}\]

A simple, non-rigorous analysis of quicksort's complexity:

Each partition operation of quicksort places the pivot at a position where all values to its left are smaller and all values to its right are larger, so \(f(n) = 2f(n/2) + f(partition)\). The partition operation only needs a single loop, hence it is \(O(n)\), so
\[\begin{split}
f(n) &= 2f(n/2) + O(n) \\
&= 4f(n/4) + O(n) + 2 \cdot O(n/2) \\
&= 8f(n/8) + O(n) + 2 \cdot O(n/2) + 4 \cdot O(n/4) \\
&\;\;\cdots \\
&= 2^{\log_{2}n} f(1) + \underbrace{O(n) + O(n) + \cdots + O(n)}_{k = \log_{2}n} \\
&= n + O(n\log_{2}n) \\
&= O(n\log_{2}n)
\end{split} \tag{2}\]
Of course, a rigorous analysis also requires introducing probability (random pivot distribution); this is just a simple, non-rigorous derivation.
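For reference, a minimal Lomuto-style partition (one of several common variants; the source does not specify which) shows why partition is a single O(n) pass:

```cpp
#include <utility>
#include <vector>
using namespace std;

// One partition pass over arr[lo..hi]: places arr[hi] (the pivot) at its
// final position, smaller values to its left, larger values to its right.
int partitionStep(vector<int>& arr, int lo, int hi) {
    int pivot = arr[hi];
    int i = lo;                       // boundary of the "smaller" region
    for (int j = lo; j < hi; j++)     // single O(n) loop
        if (arr[j] < pivot)
            swap(arr[i++], arr[j]);
    swap(arr[i], arr[hi]);            // drop the pivot into place
    return i;
}
```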

Amortized complexity analysis (amortized time)

Typical example: Dynamic array (vector)
Each dynamic capacity expansion (resize()) requires allocating new space and copying the elements one by one, so that single operation costs \(O(n)\). The question: what is the amortized complexity of the vector's push_back?

Assume the current array capacity is n. Going from empty to full, each operation costs \(O(1)\). If one more element is added at that point, a resize is needed, so that last operation costs \(O(n)\). On average, the total cost of these n+1 operations is
\[\underbrace{O(1) + O(1) + \cdots + O(1)}_{n} + O(n) = O(2n)\]
So, amortized, each push_back operation costs \(O(\frac{2n}{n+1}) = O(2) = O(1)\).

Now the next question: if pop_back resizes down as soon as size drops to 1/2 of capacity, what is the complexity?
Assume the current array capacity is 2n. Going from full to half-full, each operation costs \(O(1)\); if a pop_back then triggers a resize, it costs \(O(n)\) again, so from this viewpoint the amortized cost still looks like \(O(1)\).
But consider a pathological case: the array is full, a push_back doubles it at a cost of \(O(n)\); immediately deleting an element brings it back to the critical point and triggers another \(O(n)\) resize. Alternating insertions and deletions at this boundary degrade each single operation to \(O(n)\). This is known as complexity oscillation.

The right approach: on pop_back, wait until size has dropped to 1/4 of capacity, and only then resize (halve the capacity).
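A sketch of that policy (capacity bookkeeping only, no actual storage; the struct name and the minimum capacity of 4 are assumptions):

```cpp
// Grow by doubling when full; shrink to half only when size falls to
// capacity/4. The gap between the grow and shrink thresholds prevents
// the oscillation described above.
struct DynArray {
    int size = 0, capacity = 4;
    void push_back() {
        if (size == capacity)
            capacity *= 2;            // the O(n) copy would happen here
        ++size;
    }
    void pop_back() {
        --size;
        if (capacity > 4 && size == capacity / 4)
            capacity /= 2;            // lazy shrink: no thrashing
    }
};
```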

An interview question on amortized analysis: assume the current dynamic array/hash table holds n elements and the vector's initial capacity is 1. How many copy operations have occurred from the beginning until now?

Last resize: n/2 elements copied

Second-to-last resize: n/4 elements copied

Third-to-last resize: n/8 elements copied

...

Second resize: 2 elements copied

First resize: 1 element copied

\[\therefore S(n) = \frac{n}{2} + \frac{n}{4} + \frac{n}{8} + \cdots + 4 + 2 + 1 \tag{3}\]

\[\Rightarrow 2S(n) = n + \frac{n}{2} + \frac{n}{4} + \cdots + 8 + 4 + 2 \tag{4}\]

Subtracting (3) from (4), it follows that

\[S(n) = n - 1 \tag{5}\]

That is, the cost of inserting each new element can be amortized to the \(O(1)\) level.
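The n-1 total can be checked with a small counter (an illustrative sketch that tracks only sizes and copies):

```cpp
// Count the copy operations made by n push_backs into an array that
// starts at capacity 1 and doubles when full: each resize copies all
// `size` existing elements.
struct CopyCounter {
    int size = 0, capacity = 1;
    long long copies = 0;
    void push_back() {
        if (size == capacity) {
            copies += size;           // resize copies every element
            capacity *= 2;
        }
        ++size;
    }
};
```

For n = 1024 the copies are 1 + 2 + 4 + ... + 512 = 1023 = n - 1, matching equation (5).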

