ACM Course Summary
My ACM learning road. Now that the semester has ended, it is time to look back. At the start I hesitated over whether to drop this elective or stick with it; in the end I persisted. Over three months and fifteen days of contact with ACM I solved fifty or sixty problems on HDU. How many problems I solved is not the most important thing, though — what matters is what I learned, so first a brief summary.

Algorithms I have mastered well:
- Simple dynamic programming: the way of thinking about problems and the angle of attack, extracting the state-transition equation from a simple problem; I am familiar with DP problems involving operations inside a matrix, and have some understanding of the common and classic DP problems.
- Simple greedy: greedy is more an idea than an algorithm. At each step the locally optimal choice is made; the key is to find the right strategy and to analyse whether the optimal substructure really yields a globally optimal solution.
- Generating-function problems: after a few seconds' thought I can write out the template right away.
- High-precision (big-number) arithmetic: no real technical content, mainly care and patience; the main skill is recognising which problems are big-number problems, plus keeping the code simple.
- Prim's algorithm for the minimum spanning tree, which I can write with my eyes closed; it is really an application of the greedy idea.
- The sieve for computing primes, which simplifies the code; I can use it skilfully.
- Precomputing a table of answers.
- The Euclidean algorithm.
- Calling the library sort.
- Catalan numbers in combinatorics.

Algorithms I am fairly skilled with:
- Dijkstra's shortest-path algorithm.
- The extended Euclidean algorithm for solving linear Diophantine equations.
- The Hungarian algorithm for maximum bipartite matching, and variant bipartite-matching problems such as maximum independent set and minimum path cover.
- Topological sorting, the graph-theoretic side of relation theory.
- Applications of binary trees: decision trees.
- Using logarithms to solve some number-theory problems.
- Various sorting algorithms.
- The idea of depth-first search, and the idea of backtracking.
- Binary search and the idea of divide and conquer.
- Pruning techniques in search.
- Recursion and recursive thinking.
- Mathematical logic, simple graph theory and simple number theory from discrete mathematics, applied to problems.
- Some combinatorial methods: Fibonacci-sequence problems, recurrences, and permutation/combination counting.
- Analysis of simple game-theory problems: the Bash game and the Nim game.
- Using the cross product to decide whether two segments intersect.

Algorithms I have some understanding of:
- The STL: I learned a little, but not much.
- The Floyd-Warshall all-pairs shortest-path algorithm.
- Kruskal's algorithm for the minimum spanning tree.
- The union-find (disjoint-set) idea.
- Segment trees for the RMQ problem.

This is the knowledge from these hundred-odd days; below is a more detailed summary of each module.
One: Greedy Algorithms
One, the general idea of a greedy algorithm:
1: Build a mathematical model to describe the problem.
2: Divide the problem into several sub-problems.
3: Solve each sub-problem to obtain its local optimal solution.
4: Combine the local optimal solutions of the sub-problems into a solution of the original problem.
Second, the greedy algorithm is generally used for problems such as:
interval (segment) covering, knapsack-style division, activity scheduling, and number-combination problems.
But the algorithm has some limitations:
there is no guarantee that the final solution obtained is optimal; for some problems it cannot find the true maximum or minimum, only a feasible solution satisfying certain constraints.
A greedy algorithm usually sorts the input before applying its strategy, and then makes the optimal selection over the sorted data. For sorting I generally call the sort function directly; before taking this class I wrote sorts myself, using the most basic bubble sort or simple selection sort. Having since studied data structures, I know there are many sorting methods, but generally I use the library sort — after all, it is simple. Another very good container is the priority queue: numbers inserted into it come out sorted automatically, which is very convenient.
The minimum-spanning-tree algorithms in graph theory are good examples of greedy algorithms. In fact, the greedy idea can be combined with many other algorithms, making them easier to apply and the results more accurate.
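As a concrete illustration of "sort first, then greedily select", here is a small Python sketch of the classic activity-scheduling problem mentioned above (contest code would normally be C++, but the idea is identical; the interval data is made up for illustration):

```python
def max_activities(intervals):
    """Return the maximum number of non-overlapping activities.

    Greedy strategy: sort by finish time, then repeatedly pick the
    first activity that starts no earlier than the last chosen finish.
    """
    intervals = sorted(intervals, key=lambda iv: iv[1])  # earliest finish first
    count, last_end = 0, float("-inf")
    for start, end in intervals:
        if start >= last_end:   # compatible with everything chosen so far
            count += 1
            last_end = end
    return count

# (start, end) pairs; the best schedule here takes 4 activities.
print(max_activities([(1, 2), (3, 4), (0, 6), (5, 7), (8, 9)]))  # 4
```

Sorting by finish time is exactly the "optimal strategy" the text speaks of: one can prove the earliest-finishing compatible activity is always a safe local choice.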
Two: Search Algorithms
A search algorithm is a method that uses the raw speed of a computer to enumerate, purposefully and exhaustively, some or all of the possible states of a solution space, and thereby find the solution to a problem.
One: Binary search — repeatedly halve the interval, paying attention to precision control; ternary search is similar and applies to unimodal functions.
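A Python sketch of both ideas, with the precision control the text warns about done via an `eps` tolerance (the functions and tolerances are illustrative choices, not fixed conventions):

```python
def bisect_root(f, lo, hi, eps=1e-9):
    """Find x in [lo, hi] with f(x) ~ 0, assuming f is monotone
    increasing and f(lo) <= 0 <= f(hi). Precision controlled by eps."""
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return lo

def ternary_max(g, lo, hi, eps=1e-9):
    """Maximise a unimodal g on [lo, hi]: probe two interior points
    and discard the third of the interval that cannot hold the peak."""
    while hi - lo > eps:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if g(m1) < g(m2):
            lo = m1
        else:
            hi = m2
    return lo

# Cube root of 8 by bisection, and the peak of an upside-down parabola.
print(bisect_root(lambda t: t * t * t - 8, 0.0, 10.0))
print(ternary_max(lambda x: -(x - 3) ** 2, 0.0, 10.0))
```

Both loops terminate once the interval is shorter than `eps`; choosing `eps` too large loses the answer, too small wastes iterations — that is the "precision control".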
Two: DFS, depth-first search:
Its basic idea: to obtain a solution, pick one possible branch (child node) and explore forward; during exploration, as soon as the current choice is found not to meet the requirements, backtrack to the parent node, choose another child, and continue exploring; repeat until the optimal solution is obtained. Depth-first search can be implemented with recursion or with an explicit stack. Put simply, it keeps searching deeper until the required condition is found or all nodes have been visited — much like a pre-order traversal.
Transforming the original problem into a tree of states is the crucial step; once that transformation is complete, the problem is basically solved.
Optimisation idea: reduce the total number of states traversed. Methods include reducing the number of nodes, setting custom backtracking boundaries, and memoised search, which avoids re-traversing subtrees that have already been visited.
Three principles of pruning in depth-first search:
Correctness: the "branch" that is cut must not contain the best answer;
Accuracy: subject to the first principle, cut away as many branches that cannot contain the best answer as possible;
Efficiency: pruning should let the search reach the optimal solution faster.
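The DFS-plus-pruning idea above can be sketched in Python on a subset-sum counting problem (the problem and data are invented for illustration; the two pruning tests respect the correctness principle — neither can cut a branch containing a valid answer, since all numbers are positive):

```python
def count_subsets(nums, target):
    """Count subsets of positive integers summing to target, by DFS
    with two prunings: overshoot, and 'not enough remaining'."""
    nums = sorted(nums, reverse=True)        # try large values first
    suffix = [0] * (len(nums) + 1)           # suffix sums, for pruning
    for i in range(len(nums) - 1, -1, -1):
        suffix[i] = suffix[i + 1] + nums[i]

    def dfs(i, remaining):
        if remaining == 0:
            return 1                         # found one valid subset
        # Pruning: out of items, overshot, or too little left to reach target.
        if i == len(nums) or remaining < 0 or remaining > suffix[i]:
            return 0
        # Branch: take nums[i], or skip it.
        return dfs(i + 1, remaining - nums[i]) + dfs(i + 1, remaining)

    return dfs(0, target)

print(count_subsets([1, 2, 3, 4, 5], 5))  # 3: {5}, {1,4}, {2,3}
```

Without the `remaining > suffix[i]` test the search still gives the right answer, just slower — a pruned branch must be provably fruitless, never merely unlikely.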
Three: BFS, breadth-first search:
The process: first visit the starting vertex v0 and mark it as visited; then visit all the unvisited neighbours v1, v2, ..., vt of v0, marking each as visited; then, in the order v1, v2, ..., vt, visit all the unvisited neighbours of each of those vertices and mark them as visited; and so on, until every vertex in the graph reachable from v0 has been visited. The search spreads level by level until the goal is found or all nodes have been visited. When there are many states, breadth-first search can be handled with a circular queue or a dynamic linked list. It is similar to a level-order traversal.
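The level-by-level process described above can be sketched in Python with a queue (the small graph is made up for illustration; in unweighted graphs the level at which BFS first reaches a vertex is its shortest distance):

```python
from collections import deque

def bfs_distances(adj, source):
    """Shortest edge-counts from source in an unweighted graph.
    adj is an adjacency list: {vertex: [neighbours]}."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:          # mark on first visit
                dist[v] = dist[u] + 1  # one level deeper than u
                q.append(v)
    return dist

g = {1: [2, 3], 2: [4], 3: [4], 4: []}
print(bfs_distances(g, 1))  # {1: 0, 2: 1, 3: 1, 4: 2}
```

The queue plays exactly the role the text describes: vertices are expanded in the order they were discovered, so the whole of level k is processed before level k+1.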
Four: Issues to be aware of
First, the problems to consider when a program involves arithmetic operators:
(1) Whenever there is multiplication or addition (especially multiplication), consider whether integer overflow is possible: should you use a 32-bit integer, a 64-bit integer, or big-number arithmetic?
(2) Whenever there is a modulo operation, any subtraction may produce a negative value, and the result of taking a negative number modulo m differs between languages, so it is best to handle the case specially, e.g. compute ((x % m) + m) % m.
(3) Whenever there is division, consider whether the divisor can be 0; if so, write a special check for it.
Notes on reading problems:
(1) Some problems are really about finding a pattern; do not be scared off by complicated conditions and a huge data range (a huge range is often itself a hint that the problem has a pattern). You can start from small cases, or first solve the problem with one restrictive condition removed and then use that idea to solve the original problem.
(2) Pay attention to changing your point of view; do not be rigid. Binary search is not only for lookup — it also works for optimisation problems. Suppose the answer is x (which gives you one extra condition to work with) and binary-search on x; this converts the optimisation problem into a decision problem. (Some optimisation problems look intractable directly but become easy as decision problems.)
(3) Grasp the essentials: search algorithms and dynamic programming both select the optimal solution from among many strategies, while a greedy algorithm is different — it follows a fixed rule and repeatedly makes the currently optimal choice.
(4) Organising the data well (sorting, heaps) is the most common way to optimise running time, and one of the most important factors in designing data structures. Because we need to look things up many times, we keep the dictionary sorted, so every lookup is much faster; if we only looked something up once, sorting the dictionary would be unnecessary.
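Point (2) above — assume the answer is x and binary-search on it — can be sketched in Python on an illustrative problem: split an array into k contiguous parts minimising the largest part sum. The optimisation question becomes the decision question "is a largest part sum of x achievable?", which a greedy scan answers:

```python
def min_max_part_sum(nums, k):
    """Minimise the largest part sum over all splits of nums into
    k contiguous parts, by binary search on the answer."""
    def feasible(x):
        # Decision version: pack greedily, count how many parts we need.
        parts, cur = 1, 0
        for v in nums:
            if v > x:
                return False           # a single element already exceeds x
            if cur + v > x:
                parts, cur = parts + 1, v
            else:
                cur += v
        return parts <= k

    lo, hi = max(nums), sum(nums)      # answer is bracketed by these
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid                   # mid works: try smaller
        else:
            lo = mid + 1               # mid fails: need larger
    return lo

print(min_max_part_sum([7, 2, 5, 10, 8], 2))  # 18: [7,2,5] and [10,8]
```

The monotonicity that makes this valid: if a maximum part sum of x is achievable, any x' > x is achievable too, so `feasible` flips from False to True exactly once.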
Three: Dynamic Programming
Dynamic programming is a method for solving multi-stage decision problems.
One: Multi-stage decision problems:
If the solution process of a class of problems can be divided into several interrelated stages, a decision must be made at each stage, and each decision influences the next stage — thereby determining the path of the whole process — then the problem is called a multi-stage decision problem.
Solving a multi-stage decision problem means choosing, among all selectable strategies, an optimal one that achieves the best result under a predetermined criterion.
Second, the principle of optimality:
Regardless of the initial state and the first decision, the remaining decisions must form an optimal decision sequence relative to the new state created by that first decision.
Every sub-sequence of an optimal decision sequence must itself be a locally optimal decision sub-sequence.
A sequence containing a sub-sequence that is not locally optimal cannot be an optimal decision sequence.
Third, the guiding idea of dynamic programming:
When making the decision at each step, list the possible local solutions;
according to some criterion, discard the local solutions that certainly cannot lead to the optimal solution;
making each step optimal in this sense guarantees that the whole is optimal.
Four, the basic characteristics of dynamic programming problems:
The problem has the character of multi-stage decision making.
Each stage has a corresponding "state"; the quantity describing the state is called the "state variable".
Each stage faces a decision, and choosing different decisions leads to different states in the next stage.
The optimal solution of each stage can be reduced recursively to the optimal solutions of the possible states of the next stage; the sub-problems have the same structure as the original problem.
Five, the general problem-solving steps of dynamic programming:
1: Determine whether the problem has the optimal-substructure property; if not, dynamic programming cannot be used.
2: Divide the problem into several sub-problems (stages).
3: Establish the state-transition equation (recurrence).
4: Find the boundary conditions.
5: Substitute the known boundary values into the equation.
6: Solve recursively.
The common problems include:
the 0-1 knapsack problem, maximum subarray sum, longest monotone subsequence, the number triangle, etc.
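Taking the first of these classics, here is a Python sketch of the 0-1 knapsack following the steps above — the state `dp[c]` is the best value achievable with capacity c, and the transition compares skipping an item with taking it (items and capacity are made-up example data):

```python
def knapsack(weights, values, capacity):
    """0-1 knapsack with the rolling one-dimensional array.
    Iterating capacity downwards ensures each item is used at most once."""
    dp = [0] * (capacity + 1)          # boundary: value 0 with no items
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):
            # Transition: skip this item, or take it at cost w.
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# Items of weight 2,3,4 and value 3,4,5; capacity 5 -> take the first two.
print(knapsack([2, 3, 4], [3, 4, 5], 5))  # 7
```

The downward capacity loop is the point worth remembering: looping upwards would let `dp[c - w]` already include the current item, silently turning the problem into the unbounded knapsack.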
Six, problems to watch for when writing dynamic programming code:
1: Boundary handling — make arrays a bit larger than strictly needed, to guard against careless out-of-bounds access.
2: Initialisation — do not underestimate it; a missing initialisation can be fatal.
3: Watch the variables carefully when writing the code — is it i or j, k or k-1? Look carefully, write clearly, and re-check often, to prevent low-level mistakes that should never happen.
4: The state in dynamic programming can usually be defined in several ways, each of which can produce the answer; however, different state definitions often differ greatly in time and space efficiency. When solving a DP problem, try to think of several state definitions and then code the most efficient one.
5: Often dynamic programming cannot be applied directly; in large-scale competitions some necessary preprocessing or transformation (such as sorting, or a shortest-path computation) is frequently needed before dynamic programming can be applied to find the optimal solution.
6: Some problems have rather ingenious ideas and require some conjecture and proof before they can be solved simply and smoothly. I feel this is the hardest part of DP, but also the most frequently tested.
Four: Graph Algorithms
Graph theory is a branch of mathematics whose object of study is the graph: a set of given points together with lines joining pairs of points. It is usually used to describe a particular relation between things — a point represents a thing, and a line joining two points indicates that the corresponding two things stand in that relation.
Graph theory is the last topic of this ACM programming course, and I think also the hardest. The algorithms mentioned above are only a small part of graph theory, which is highly comprehensive: some graph problems even call for greedy or search ideas, so each concrete problem must be treated on its own terms.
Another feature of graph problems is that they are very template-like: once a common algorithm is written as a function, a similar problem can be solved correctly just by copying the ready-made function in and adapting the main() function and the input/output format.
The main topics learned here are union-find, minimum spanning trees and shortest paths: for minimum spanning trees, the Prim and Kruskal algorithms; for shortest paths, the very common Dijkstra and Bellman-Ford algorithms.
Union-Find (Disjoint Sets)
One, the two common operations:
Merging two sets.
Finding which set an element belongs to.
Idea: each time you do a find, if the path is long, update the stored information so that the next find is faster (path compression).
Steps:
First, find the root node.
Second, modify all nodes on the lookup path so that they point directly to the root.
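The two operations and the path-compression step can be sketched in Python (union by size is an extra, standard refinement beyond what the text describes; the compression here uses the common path-halving variant):

```python
class DSU:
    """Union-find with path compression (and union by size)."""

    def __init__(self, n):
        self.parent = list(range(n))   # each element starts as its own root
        self.size = [1] * n

    def find(self, x):
        # Path compression (halving): re-point visited nodes nearer the root.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False               # already in the same set
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra            # attach the smaller tree under the larger
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        return True

d = DSU(5)
d.union(0, 1)
d.union(1, 2)
print(d.find(0) == d.find(2), d.find(0) == d.find(3))  # True False
```

With both optimisations each operation runs in nearly constant amortised time, which is why union-find appears inside Kruskal's algorithm below.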
(1) Prim's algorithm:
Basic idea:
Add an arbitrary vertex to the spanning tree;
among the edges with one endpoint in the spanning tree and the other outside it, add the edge of least weight, together with its outside endpoint, to the tree;
repeat the previous step until all vertices are in the spanning tree.
Prim's algorithm is generally suited to dense graphs: since it works vertex by vertex, its cost has little to do with the number of edges.
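The Prim idea above can be sketched in Python with a priority queue selecting the least-weight crossing edge (the graph data is invented, and the graph is assumed connected and undirected):

```python
import heapq

def prim_mst(adj, n):
    """Total weight of a minimum spanning tree by Prim's algorithm.
    adj[u] is a list of (weight, v) pairs; the graph is assumed connected."""
    in_tree = [False] * n
    total = 0
    heap = [(0, 0)]                    # start from vertex 0 at cost 0
    while heap:
        w, u = heapq.heappop(heap)     # cheapest edge crossing into the tree
        if in_tree[u]:
            continue                   # stale entry: u joined earlier
        in_tree[u] = True
        total += w
        for wv, v in adj[u]:
            if not in_tree[v]:
                heapq.heappush(heap, (wv, v))
    return total

# 4 vertices; edges 0-1(1), 0-2(4), 1-2(2), 2-3(3), 1-3(5); MST = 1+2+3.
adj = [[(1, 1), (4, 2)],
       [(1, 0), (2, 2), (5, 3)],
       [(4, 0), (2, 1), (3, 3)],
       [(3, 2), (5, 1)]]
print(prim_mst(adj, 4))  # 6
```

The heap replaces a linear scan for the least-weight crossing edge; with an adjacency matrix and a plain scan the running time is O(V^2), which is exactly why Prim suits dense graphs.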
(2) Kruskal's algorithm
The basic idea of Kruskal's algorithm:
Sort all edges from smallest to largest weight;
consider the edges in turn: if adding an edge creates no cycle, add the edge and its endpoints to the spanning tree; otherwise discard it;
terminate once n-1 edges are in the spanning tree.
The time complexity of the algorithm is O(E log E).
In other words, examine the edges in increasing order of weight and take the current edge as a tree edge whenever it does not create a cycle; the result is a minimum spanning tree.
It is usually used together with union-find.
Kruskal's algorithm is generally used for sparse graphs: since it works edge by edge, the fewer edges the better.
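The Kruskal steps above can be sketched in Python, with a small inline union-find supplying the "does this edge create a cycle?" test (the edge list is invented example data):

```python
def kruskal_mst(n, edges):
    """Total weight of a minimum spanning tree by Kruskal's algorithm.
    edges is a list of (weight, u, v); vertices are 0..n-1."""
    parent = list(range(n))            # inline union-find

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    total, used = 0, 0
    for w, u, v in sorted(edges):      # edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # endpoints in different sets: no cycle
            parent[ru] = rv
            total += w
            used += 1
            if used == n - 1:          # n-1 tree edges: done
                break
    return total

edges = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (3, 2, 3), (5, 1, 3)]
print(kruskal_mst(4, edges))  # 6
```

The O(E log E) bound in the text comes from the sort; the union-find operations afterwards are nearly free, so on sparse graphs this comfortably beats a V-by-V approach.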
Minimum spanning trees are usually used for connecting a graph at minimum cost, or for problems asking for the minimum number of roads to build — for example, the classic problems where some roads have already been built.
(3) Dijkstra's algorithm
Divide the vertex set V into two groups:
(1) S: the set of vertices whose shortest path has already been determined (initially containing only the source v0);
(2) T = V - S: the set of vertices not yet determined.
Add the vertices of T to S in increasing order of shortest-path length, maintaining:
(1) the shortest-path length from the source v0 to any vertex in S is no greater than the shortest-path length from v0 to any vertex in T;
(2) each vertex carries a distance value — for a vertex in S, the length of the shortest path from v0; for a vertex in T, the length of the shortest path from v0 that uses only vertices of S as intermediate vertices.
Justification: it can be shown that the shortest path from v0 to a vertex vk in T is either the direct edge from v0 to vk, or a path from v0 through vertices of S to vk.
Steps for finding the shortest paths
The algorithm, for G = {V, E}, proceeds as follows:
1. Initially S = {v0} and T = V - S = {the remaining vertices}. The distance value of a vertex vi in T is the weight of the arc <v0, vi> if that arc exists, and ∞ otherwise.
2. From T, select the vertex w with the smallest distance value and add it to S.
3. Update the distance values of the vertices remaining in T: if going through w as an intermediate vertex shortens the distance from v0 to vi, update that distance value. Repeat steps 2 and 3 until S contains all vertices, i.e. until T is empty.
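The steps above can be sketched in Python with a heap playing the role of "select the vertex in T with the smallest distance value" (the three-vertex graph is invented; all weights must be non-negative):

```python
import heapq

def dijkstra(adj, n, src):
    """Single-source shortest distances by Dijkstra's algorithm.
    adj[u] is a list of (v, weight) with weight >= 0."""
    INF = float("inf")
    dist = [INF] * n                   # the 'distance values' of the text
    dist[src] = 0
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)     # step 2: smallest distance in T
        if d > dist[u]:
            continue                   # stale entry: u was settled earlier
        for v, w in adj[u]:            # step 3: update via the new vertex u
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Edges: 0->1 (2), 0->2 (5), 1->2 (1); going via vertex 1 shortens 0->2.
adj = [[(1, 2), (2, 5)], [(2, 1)], []]
print(dijkstra(adj, 3, 0))  # [0, 2, 3]
```

Instead of explicitly moving vertices from T to S, the sketch leaves outdated heap entries in place and skips them on pop — a common simplification that keeps the same invariant.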
(4) Bellman-Ford algorithm:
Bellman-Ford is a single-source shortest-path algorithm that can handle negative edge weights. It is quite inefficient, but the code is easy to write. Its principle is repeated relaxation (why the operation is called "relaxation" is much debated): in each pass, every edge is relaxed and distances are updated; if updates are still possible after n-1 passes, the graph contains a negative cycle and no result can be obtained; otherwise the computation is complete. Bellman-Ford admits a small optimisation: before each pass set a flag to false, set it to true whenever some edge causes an update, and exit successfully as soon as a pass ends with the flag still false. Plain Bellman-Ford wastes a lot of time on unnecessary relaxations; the SPFA algorithm optimises it with a queue, to remarkable effect — the efficiency gain is hard to believe. SPFA in turn has SLF, LLL, rolling-array and other optimisations.
Algorithm description:
1. Initialisation: for every vertex v except the source, set the shortest-distance estimate d[v] ← +∞; set d[s] ← 0.
2. Iterative solution: repeatedly relax every edge in the edge set E, so that the shortest-distance estimate of each vertex v in V gradually approaches its true shortest distance (run |V|-1 passes).
3. Negative-cycle test: check whether the distance estimates at the endpoints of every edge in E have converged. If some vertex has not converged, the algorithm returns false, meaning the problem has no solution; otherwise it returns true, and d[v] holds the shortest distance from the source to vertex v.
Note that Dijkstra's algorithm cannot handle negative edge weights; in that case the Bellman-Ford algorithm is needed.
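Bellman-Ford as described above — n-1 relaxation passes, the early-exit flag, and the negative-cycle test — can be sketched in Python (the edge lists are invented; a `None` return stands in for the "returns false" of the description):

```python
def bellman_ford(n, edges, src):
    """Single-source shortest distances with negative weights allowed.
    edges is a list of (u, v, w). Returns the distance list, or None
    if a negative cycle is reachable from src."""
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0
    for i in range(n):                 # up to n passes: the last is the test
        changed = False                # the early-exit flag from the text
        for u, v, w in edges:          # relax every edge
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            return dist                # converged early: done
        if i == n - 1:
            return None                # still relaxing on the n-th pass
    return dist

# Edge 1->2 has weight -3, so the best route 0->2 goes via vertex 1.
edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3)]
print(bellman_ford(3, edges, 0))  # [0, 4, 1]
```

Running one extra pass and checking whether anything still relaxes is precisely step 3 of the description: after n-1 passes all true shortest distances have converged unless a reachable negative cycle keeps lowering them.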
It began in a hurry and it ends in a hurry. In these hundred-odd days I truly learned a great deal about algorithms, which I believe will be a great help in my future programming. I hope this summary also helps you understand ACM algorithms better!