1. Computing
The purpose of learning DSA is to achieve correct and efficient computing while keeping resource consumption low, i.e. low cost.
Computing = information processing: using some tool, following a definite set of rules, in a clear and mechanical way.
Computing model = computer = the tool for information processing.
Algorithm: a sequence of instructions designed to solve a specific problem under a given computing model.
Algorithm elements:

| Element | Meaning |
| --- | --- |
| Input | the information to be processed (the problem) |
| Output | the processed information (the answer) |
| Correctness | it can indeed solve the specified problem |
| Determinism | the algorithm can be described as a sequence of basic operations |
| Feasibility | each basic operation can be implemented, and completes in constant time |
| Finiteness | for any input, the output is obtained after finitely many basic operations |
| ...... | ...... |
Elements of a good algorithm:

| Quality | Meaning |
| --- | --- |
| Correct | conforms to the syntax; compiles and runs |
| Robust | recognizes illegal input and handles it properly, instead of exiting abnormally (see the sketch after this table) |
| Readable | structured + accurate naming + comments |
| Efficient | as fast as possible; as little storage space as possible |
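As a tiny illustration of the robustness criterion (my own sketch, not from the notes; the function name is hypothetical):

```cpp
#include <optional>

// A robust average(): it recognizes illegal input (null pointer or empty
// array) and reports failure cleanly instead of crashing or dividing by zero.
std::optional<double> average(const int A[], int n) {
    if (A == nullptr || n < 1) return std::nullopt; // handle illegal input
    double s = 0;
    for (int i = 0; i < n; ++i) s += A[i];
    return s / n;
}
```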
2. Computing Model
Two main aspects of algorithm analysis:
1. Correctness: whether the algorithm's behavior is consistent with the problem's requirements.
2. Cost: the running time + the storage space required.
How do we measure cost?
1. Apply a specific algorithm to different instances:
Problem: different instances of the same problem, even of the same scale, can have different, sometimes substantially different, computing costs.
For example: among N points in the plane, find the three points spanning the triangle of smallest area. With the brute-force algorithm we must enumerate C(n, 3) triples in the worst case; with luck, a single try suffices. (A sketch of this brute-force enumeration follows below.)
Result: to be safe, we focus only on the worst (highest) cost.
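A minimal sketch of that brute-force algorithm (my own illustration, assuming a simple Point struct; not code from the notes):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Point { double x, y; };

// Area of triangle (a, b, c), via the cross product.
double triArea(const Point& a, const Point& b, const Point& c) {
    return std::fabs((b.x - a.x) * (c.y - a.y)
                   - (c.x - a.x) * (b.y - a.y)) / 2.0;
}

// Enumerate all C(n, 3) triples: Theta(n^3) work in the worst case.
double minTriangleArea(const std::vector<Point>& p) {
    double best = INFINITY;
    for (std::size_t i = 0; i + 2 < p.size(); ++i)
        for (std::size_t j = i + 1; j + 1 < p.size(); ++j)
            for (std::size_t k = j + 1; k < p.size(); ++k)
                best = std::min(best, triArea(p[i], p[j], p[k]));
    return best;
}
```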
2. Apply different algorithms to a specific problem:
Problem: experimental measurement is the most direct method, but it cannot accurately reflect an algorithm's true efficiency.
For example: different algorithms may be better suited to different problem scales;
different algorithms may be better suited to different problem types;
the same algorithm may be implemented by different programmers, in different programming languages, with different compilers;
the same algorithm may be implemented and run on different architectures, operating systems, ......
Result: for objective evaluation, we must abstract an ideal platform or model that no longer depends on the concrete factors above, so that algorithms can be described, measured, and evaluated directly and accurately.
TM (Turing Machine) and RAM (Random Access Machine):
The TM and RAM models are simplifications and abstractions of everyday computing tools; they let us measure and compare algorithm efficiency credibly, independently of any specific platform.
In these models, an algorithm's running time is the number of basic operations it must execute: T(n) = the number of basic operations the algorithm performs to solve a problem instance of size n.
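A small sketch of what "counting basic operations" means in practice (my own illustration, not from the notes):

```cpp
#include <cstdio>

long long basicOps = 0; // stands in for the RAM model's operation count T(n)

// Find the maximum of A[0, n): each comparison/assignment is one basic operation.
int findMax(const int A[], int n) {
    int max = A[0];                 ++basicOps;     // initial assignment
    for (int i = 1; i < n; ++i) {
        ++basicOps;                                 // comparison A[i] > max
        if (A[i] > max) { max = A[i]; ++basicOps; } // conditional assignment
    }
    return max;
}

int main() {
    int A[] = {3, 1, 4, 1, 5, 9, 2, 6};
    int m = findMax(A, 8); // run first so the counter holds its final value
    std::printf("max = %d, T(8) = %lld basic operations\n", m, basicOps);
    return 0;
}
```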
3. Big-O Notation
Asymptotic analysis: when the problem scale is large enough, how does the computing cost grow? (Figure: growth of computing cost with problem size.)
We are not concerned with the local, subtle, short-term fluctuations of this curve, but with its main, long-term trend.
To this end, we can use the so-called big-O notation to simplify the cost estimate:
T(n) = O(f(n)) iff ∃ c > 0 such that T(n) < c·f(n) whenever n >> 2 (i.e. for all sufficiently large n).
Compared with T(n), f(n) is more concise, yet it still reflects the former's growth trend.
Asymptotic analysis has other notations as well, such as the big-Ω notation:
T(n) = Ω(f(n)) iff ∃ c > 0 such that T(n) > c·f(n) whenever n >> 2.
And the big-Θ notation:
T(n) = Θ(f(n)) iff ∃ c1 > c2 > 0 such that c2·f(n) < T(n) < c1·f(n) whenever n >> 2.
The big-Ω notation denotes an asymptotic lower bound of the function, while the big-Θ notation denotes an asymptotically tight bound (big-O itself being the upper bound).
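As an illustrative check of these definitions (my own example, not from the notes): take T(n) = 2n² + 3n. For n > 3 we have 3n < n², so T(n) < 3n², i.e. T(n) = O(n²) with c = 3; clearly T(n) > 2n², so T(n) = Ω(n²) with c = 2; the two bounds together give T(n) = Θ(n²).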
Two properties of big-O notation:
(1) For any constant c > 0, O(f(n)) = O(c·f(n)); e.g. O(3n²) = O(n²).
(2) For any constants a > b > 0, O(n^a + n^b) = O(n^a); e.g. O(n³ + n²) = O(n³).
Measuring algorithms with big-O notation, the common complexity classes include constant O(1), logarithmic O(log n), linear O(n), O(n log n), polynomial (e.g. O(n²)), and exponential O(2^n).
(Figure: comparison of complexity growth rates.)
4. Algorithm Analysis
Two main tasks = correctness (invariance × monotonicity) + complexity.
The basic instructions of C++ and other high-level languages are each equivalent to a constant number of RAM basic operations; in the asymptotic sense, the two are essentially equivalent.
The main methods of complexity analysis:
1. Iteration: series summation (a sketch follows this list).
2. Recursion: recursion trace + recurrence equations.
3. Guessing + verification.
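A sketch of method 1 (my own illustration, not from the notes): a double loop whose total cost is obtained by summing a series, tying into the table of series complexities below.

```cpp
#include <utility> // std::swap

// Bubble sort: pass i performs i comparisons, so the total cost is the
// arithmetic series (n-1) + (n-2) + ... + 1 = n(n-1)/2, the same order
// as the square of the last term, hence O(n^2).
void bubbleSort(int A[], int n) {
    for (int i = n - 1; i > 0; --i)      // n - 1 passes
        for (int j = 0; j < i; ++j)      // i comparisons per pass
            if (A[j] > A[j + 1]) std::swap(A[j], A[j + 1]);
}
```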
Complexity of common series:

| Series | Complexity |
| --- | --- |
| Arithmetic series: 1 + 2 + ... + n | same order as the square of the last term: O(n²) |
| Power series: 1^d + 2^d + ... + n^d | one order higher than the power: O(n^(d+1)) |
| Geometric series: 1 + 2 + 4 + ... + 2^n | same order as the last term: O(2^n) |
| Convergent series | O(1) |
| Geometric distribution | O(1) |
| Harmonic series: 1 + 1/2 + ... + 1/n | Θ(log n) |
| Logarithmic series: log 1 + log 2 + ... + log n | Θ(n log n) |
5. Iteration and Recursion
Decrease-and-conquer: to solve a problem of large scale, divide it into two subproblems, one of which is trivial and the other a reduced-scale instance of the original. Obtain the solution of the original problem by solving the subproblems separately.
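A minimal decrease-and-conquer sketch (the classic array-sum example; my own code, not from the notes):

```cpp
// sum(A, n) splits into a trivial part (the single element A[n-1]) and a
// reduced-scale instance of the same problem (the first n-1 elements).
// Recurrence: T(n) = T(n-1) + O(1) = O(n).
int sum(const int A[], int n) {
    return (n < 1) ? 0 : sum(A, n - 1) + A[n - 1];
}
```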
Divide-and-conquer: to solve a problem of large scale, divide it into several (usually two) subproblems of roughly equal scale. Obtain the solution of the original problem by solving the subproblems separately.
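And a matching divide-and-conquer sketch (again my own illustration): split the range into two halves of roughly equal size, solve each, and combine.

```cpp
// Sum of A[lo, hi). Recurrence: T(n) = 2*T(n/2) + O(1) = O(n).
int sumDC(const int A[], int lo, int hi) {
    if (hi - lo < 2) return (hi - lo < 1) ? 0 : A[lo]; // trivial base cases
    int mi = (lo + hi) / 2;                            // divide in half
    return sumDC(A, lo, mi) + sumDC(A, mi, hi);        // conquer + combine
}
```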
Note: this is the first time I have written up my notes, and this first chapter is mostly basic knowledge; much of it is taken straight from the course materials, and I hope the later chapters will be better. The course is the Data Structures course taught by instructor Deng Junhui of Tsinghua University, offered on the XuetangX MOOC platform; the class starts on January 1. If you are interested, you can check it out. I think it is quite good.
[MOOC Notes] Chapter 1: Introduction (Data Structures)