ACM Summary Report
Algorithm design
Name: Guo Jia
Study No.: 2015590
Major: Network Engineering Class II
Instructor: Feyuqui
I first heard about ACM from Yue Xun, the teacher of our introductory computer course, who can be regarded as my enlightenment teacher in computing: he led me into this vast world. He taught me that a good programmer is like a martial-arts master who must cultivate both "external technique" and "internal strength". External technique means programming languages such as C, C++, Java, and Python; internal strength means algorithms, data structures, design patterns, and the like. Neither can be neglected. He also told me that one cannot be impatient in life and must respect the craft: some programmers learn only the external technique and never cultivate internal strength, so that although they master many fashionable languages, they cannot write a good program and merely reuse packages and libraries that others have written, creating nothing of their own. He said that one must seize opportunities when they come, and during one recess he gave me such an opportunity: Professor Fei's ACM course, he said, would be both a challenge and a chance; he would recommend it but never force me, and the choice was mine. After listening to teacher Yue, I immediately went to find out what ACM is. The ACM International Collegiate Programming Contest (ACM-ICPC, or ICPC) is an annual competition sponsored by the Association for Computing Machinery (ACM) to showcase college students' creativity, teamwork, and their ability to write programs and to analyse and solve problems under pressure. Each school is represented in ACM-ICPC by teams of up to three members.
Each member must be a student of that school, there is a certain age limit, and each contestant may take part in at most two regional contests per year. In recent years China's major universities have gradually begun to take the ACM programming contest seriously, and Chinese teams now appear regularly at the World Finals. Universities such as Shanghai Jiao Tong, Tsinghua, Peking, Zhejiang, and Fudan have won championships at the finals for our country and established China's standing in the contest worldwide.
I also learned that our school, as one of Shandong Province's key provincial universities, keeps its ACM level among the top five in the province; although it is an agricultural university, it has a real voice and standing in this science-and-engineering competition. Through seniors and students in computer science and mathematics I learned that Professor Fei, as a professor in the computer department, has his own lecturing style and teaching approach, paying particular attention to the underlying ideas and teaching eclectically; his course is, frankly, very difficult, but, as teacher Yue said, it is definitely both an opportunity and a challenge. Knowing all this, I had no choice but to go forward!
I chose this course to train my algorithmic ability and my thinking, to explore ways of solving problems, and to understand the nature and inner mechanism of a running program and the framework that supports it, so as to really hold a program in my own hands. Perhaps my ability is limited and what I learned is little, but the course did not disappoint me, and the harvest was great. At the beginning, sixteen weeks felt very long; in the blink of an eye the four topics were over, and with them this elective course. Looking back on it carefully, the ideas, methods, and strategies I learned all come flooding to mind.
1. Greedy algorithm
A greedy algorithm means that, in solving a problem, we always make the choice that looks best at the moment. In other words, without considering global optimality, it makes a locally optimal choice in some sense. It is worth noting that greedy algorithms are by no means useless: once a greedy strategy has been proved correct, it yields a very efficient algorithm.
Greedy algorithms are also very common, because they are simple and constructing a greedy strategy is usually not very difficult.
As I understand it, the greedy problems we met come in two kinds: knapsack problems and interval problems.
Knapsack problems are the relatively simple kind; the example the teacher gave in class was of this type. Such problems take either the number of items or cost-effectiveness as the selection criterion, and can be solved with a single for loop. The key to solving a knapsack problem greedily is how to choose the greedy strategy: select the items in some fixed order and pack each into the knapsack as far as it fits, until the knapsack is full. There are at least three seemingly reasonable greedy strategies.
(a) Always choose the most valuable remaining item, because this increases the total value in the knapsack as quickly as possible. However, although each step gives a large gain in value, the capacity may be consumed too fast, so that fewer items fit into the knapsack, and the objective function is not guaranteed to be maximised.
(b) Always choose the lightest remaining item, so that as many items as possible can be loaded, thereby increasing the total value. However, although each step consumes capacity slowly, the value is not guaranteed to grow quickly, so again the objective function is not guaranteed to reach its maximum.
(c) The two strategies above consider only the growth of value or only the consumption of capacity. To obtain an optimal solution we must find a balance between the two: the right greedy strategy is to choose the item with the highest value per unit weight. (For the fractional knapsack, where items may be split, this strategy is provably optimal.)
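Strategy (c) can be sketched in a few lines. This is a minimal illustration of the fractional knapsack greedy; the struct and function names are my own, not from the course material.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// One item: weight and value. (Names are illustrative.)
struct Item { double weight, value; };

// Greedy fractional knapsack: sort items by value per unit weight,
// then take as much of each as still fits. Returns the total value packed.
double fractionalKnapsack(std::vector<Item> items, double capacity) {
    std::sort(items.begin(), items.end(), [](const Item& a, const Item& b) {
        return a.value / a.weight > b.value / b.weight;  // best ratio first
    });
    double total = 0.0;
    for (const Item& it : items) {
        if (capacity <= 0) break;
        double take = std::min(it.weight, capacity);  // whole item or a fraction
        total += it.value * (take / it.weight);
        capacity -= take;
    }
    return total;
}
```

The sort by ratio is exactly the "balance" of strategy (c): one for loop after it suffices, as described above.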
Interval problems are the difficult point; Problem A and similar exercises are representative of this kind. What makes interval problems hard is finding a criterion that leads to the optimal solution: if no criterion can be found, the problem cannot be solved. Taking Problem A as an example, the key is to understand which intervals run serially and which in parallel, and to take as the answer the total time of the segment that is covered the most times.
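As an example of the kind of criterion an interval problem needs, here is the classic interval-scheduling greedy ("earliest finish time first"), which selects the maximum number of pairwise non-overlapping intervals. This is a standard textbook sketch, not the solution to the course's Problem A.

```cpp
#include <algorithm>
#include <climits>
#include <utility>
#include <vector>

// Interval scheduling: pick the maximum number of non-overlapping
// intervals. The proven greedy criterion is "earliest finish time first".
// Each pair is (start, end); intervals may touch at endpoints.
int maxNonOverlapping(std::vector<std::pair<int, int>> intervals) {
    std::sort(intervals.begin(), intervals.end(),
              [](const auto& a, const auto& b) { return a.second < b.second; });
    int count = 0, lastEnd = INT_MIN;
    for (const auto& [s, e] : intervals) {
        if (s >= lastEnd) {   // fits after the last chosen interval
            ++count;
            lastEnd = e;
        }
    }
    return count;
}
```

Note that sorting by start time or by length fails here; as with the knapsack strategies above, only the right criterion makes the greedy provably optimal.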
The following is a simple pseudocode sketch of a greedy algorithm, where A is the input set of the problem (the candidate set):

    Greedy(A)
    {
        S = {};                  // initial solution set is empty
        while (!solution(S))     // S does not yet constitute a solution
        {
            x = select(A);       // greedy selection from candidate set A
            if (feasible(S, x))  // is the solution still feasible after adding x to S?
                S = S + {x};
            A = A - {x};
        }
        return S;
    }
2. Search
Broadly speaking, the search algorithms we covered fall into four categories: binary search, ternary search, depth-first search (DFS), and breadth-first search (BFS). The first two solve for an argument of a given function; the latter two solve problems on a given graph.
(i) Binary search:
Binary search applies mainly to finding the argument of a monotone function given a value of the function. It is very simple; its advantages are few comparisons, fast search, and good average performance. The basic idea is to split the n elements into two roughly equal halves and compare x with A[n/2]: if x = A[n/2], x has been found and the algorithm stops; if x < A[n/2], continue searching only the left half of the array A; if x > A[n/2], continue searching only the right half. It is important to note that binary search cannot be applied directly to a non-monotonic function; sometimes it must be combined with the derivative, from which the monotonicity and the extrema of the function can be obtained. For example, in problem 1002 one must first differentiate the given function, then binary-search, and then solve.
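The array version described above can be written as follows; this is a standard sketch with names of my own choosing.

```cpp
#include <vector>

// Binary search on a sorted array: returns the index of x, or -1 if absent.
// Each step halves the search range by comparing x with the middle element.
int binarySearch(const std::vector<int>& a, int x) {
    int lo = 0, hi = static_cast<int>(a.size()) - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  // avoids overflow of lo + hi
        if (a[mid] == x) return mid;
        if (a[mid] < x) lo = mid + 1;  // x can only lie in the right half
        else hi = mid - 1;             // x can only lie in the left half
    }
    return -1;
}
```

The same halving idea works on a continuous monotone function: replace the array access with a function evaluation and stop when the interval is narrower than the required precision.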
(ii) Ternary search:
Ternary search applies mainly to finding the argument of a unimodal (convex) function given its values. It builds on binary search, subdividing an interval one step further per iteration. Given the left and right endpoints l and r, the required point (the peak) lies somewhere in between.
Idea: approach the peak arbitrarily closely by repeatedly shrinking the range [l, r].
Method: take the midpoint mid of [l, r], then the midpoint mmid of [mid, r]; compare f(mid) with f(mmid) to decide which part of the range to discard. When at last l = r - 1, compare the values at these two points to obtain the solution.
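A common variant of the method above uses two interior points at one-third and two-thirds of the interval; the principle (discard the part that cannot contain the peak) is the same. A sketch, assuming a function that takes its maximum at a single interior peak:

```cpp
#include <cmath>
#include <functional>

// Ternary search: find the maximiser of a unimodal (single-peak) function
// on [lo, hi]. Each step compares f at two interior points and discards
// the third of the interval that cannot contain the peak.
double ternarySearchMax(const std::function<double(double)>& f,
                        double lo, double hi, int iterations = 200) {
    for (int i = 0; i < iterations; ++i) {
        double m1 = lo + (hi - lo) / 3.0;
        double m2 = hi - (hi - lo) / 3.0;
        if (f(m1) < f(m2)) lo = m1;  // peak must be to the right of m1
        else               hi = m2;  // peak must be to the left of m2
    }
    return (lo + hi) / 2.0;          // the argmax, to within the tolerance
}
```

Each iteration shrinks the interval by a factor of 2/3, so a fixed iteration count gives arbitrarily high precision.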
(iii) DFS:
Depth-first search is one of the fundamental graph algorithms. The process, in brief, is to go as deep as possible along each possible branch before backtracking, visiting each node at most once.
Depth-first traversal of a graph, starting from a vertex v, proceeds as follows:
(1) visit vertex v;
(2) from each unvisited vertex adjacent to v, continue the depth-first traversal of the graph, until every vertex connected to v by a path has been visited;
(3) if unvisited vertices remain in the graph, restart the depth-first traversal from one of them, until all vertices in the graph have been visited.
My original understanding of DFS was limited to graphs, but that is only its most basic use. DFS really represents a way of searching states: using a very simple idea, try an operation; whenever the attempt succeeds, go one level deeper in the recursion and try again; when further attempts can no longer succeed, backtrack to this point, cancel the attempt, and try a different operation. To put it simply, it is brute-force search, using the recursive return to implement backtracking after a failed attempt so that a new attempt can be made.
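The three traversal steps above can be sketched concretely for a graph stored as adjacency lists. This is a standard textbook version; the function names are my own.

```cpp
#include <vector>

// Depth-first traversal of a graph stored as adjacency lists.
// Visits each vertex once, going as deep as possible before backtracking.
void dfs(int v, const std::vector<std::vector<int>>& adj,
         std::vector<bool>& visited, std::vector<int>& order) {
    visited[v] = true;
    order.push_back(v);                  // record the visit order
    for (int u : adj[v])
        if (!visited[u])
            dfs(u, adj, visited, order); // recurse; returning = backtracking
}

// Wrapper covering disconnected graphs (step (3) above).
std::vector<int> dfsAll(const std::vector<std::vector<int>>& adj) {
    std::vector<bool> visited(adj.size(), false);
    std::vector<int> order;
    for (int v = 0; v < static_cast<int>(adj.size()); ++v)
        if (!visited[v]) dfs(v, adj, visited, order);
    return order;
}
```

The recursion stack is exactly what implements "cancel the attempt and try another": returning from dfs() is the backtrack.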
(iv) BFS:
From the algorithm's point of view, all child nodes produced by expanding a node are appended to a FIFO queue. Each iteration examines the element at the front of the queue: if it satisfies the goal condition, stop; if not, discard it and continue the loop, until the queue is empty. BFS has many applications in finding a shortest path or the minimum number of steps; it is used most of all in maze problems, and problems such as knight moves on a chessboard use the same idea.
The following is an example framework for breadth search:
    while not Queue.empty()
    begin
        // an end condition may be added here
        tmp = Queue.front()
        // expand each next state reachable from tmp
        if state next is legal then
        begin
            // generate the new state next
            next.step = tmp.step + 1
            Queue.push_back(next)
        end
        Queue.pop()
    end
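As a concrete instance of this framework, here is a minimal grid-maze BFS of the kind mentioned above (shortest number of steps); the encoding of the grid and the function name are my own assumptions.

```cpp
#include <queue>
#include <utility>
#include <vector>

// BFS on a grid maze: shortest number of steps from (sr,sc) to (tr,tc),
// moving up/down/left/right through cells marked 0 (1 = wall).
// Returns -1 if the target is unreachable.
int mazeBfs(const std::vector<std::vector<int>>& grid,
            int sr, int sc, int tr, int tc) {
    int rows = grid.size(), cols = grid[0].size();
    std::vector<std::vector<int>> dist(rows, std::vector<int>(cols, -1));
    std::queue<std::pair<int, int>> q;
    dist[sr][sc] = 0;
    q.push({sr, sc});
    int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
    while (!q.empty()) {
        auto [r, c] = q.front(); q.pop();
        if (r == tr && c == tc) return dist[r][c];  // goal reached
        for (int k = 0; k < 4; ++k) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr >= 0 && nr < rows && nc >= 0 && nc < cols &&
                grid[nr][nc] == 0 && dist[nr][nc] == -1) {
                dist[nr][nc] = dist[r][c] + 1;  // first visit = fewest steps
                q.push({nr, nc});
            }
        }
    }
    return -1;
}
```

Because the queue is FIFO, the first time a cell is reached is along a path with the fewest steps, which is why BFS answers shortest-step questions directly.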
The following is an example framework for deep search:
Recursive implementation:

    function dfs(int step, current state)
    begin
        // an end condition may be added here
        // expand each next state reachable from the current state
        if state next is legal then
            dfs(step + 1, next)
    end
Non-recursive implementation:

    while not Stack.empty()
    begin
        tmp = Stack.top()
        // expand the next not-yet-expanded state from tmp
        if there is no unexpanded state (a leaf node was reached) then
            Stack.pop()
        else if state next is legal then
            Stack.push(next)
    end
3. Dynamic programming
Dynamic programming is a way of thinking for solving optimisation problems, a method rather than a specific algorithm. Unlike search or numerical calculation, it has no standard mathematical expression and no clear-cut, universal solving procedure. Dynamic programming design is usually aimed at an optimisation problem; because different problems differ in nature, the conditions that determine an optimal solution differ as well, so dynamic programming develops its own characteristic solution method for each problem, and there is no universal dynamic programming algorithm that can solve every kind of optimisation problem.
Dynamic programming problems involve heavy computation, so one usually lists all possible subproblems in advance and saves their answers in a table, retrieving them by key; the characteristic of this method is trading space for time. Take the Fibonacci sequence as a simple example: computing F(n) directly from the formula F(n) = F(n-1) + F(n-2) recomputes the same values over and over, exponentially many times in total, but if every computed value F(k) is saved, then each F(n) costs only one extra addition on top of the stored results, so the whole computation is linear. Clearly this approach saves a great deal of time.
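The "save to a table, look up by key" idea can be shown in a few lines with Fibonacci; this is a standard memoisation sketch, not code from the course.

```cpp
#include <vector>

// Fibonacci with memoisation: each F(n) is computed once and cached,
// turning the exponential naive recursion into linear time --
// the "trade space for time" idea described above.
long long fib(int n, std::vector<long long>& memo) {
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];  // already in the table: reuse it
    return memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
}

long long fib(int n) {
    std::vector<long long> memo(n + 1, -1);  // -1 marks "not yet computed"
    return fib(n, memo);
}
```

Without the table, fib(50) would take billions of calls; with it, fifty-odd.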
Compared with greedy and search problems, a striking feature of DP problems is that the code is very short: a greedy or search problem generally takes fifty or sixty lines, while the code for a DP problem is often barely ten lines or even shorter, because the real core is a single line, the recurrence formula. But this does not mean DP problems are easy. They demand very strong logical thinking (the idea matters enormously: with a good idea you can AC five problems in a day; with a bad one you may solve none all day), and the ability to turn a problem statement into states in your head. In short, doing DP really means solving with your brain, not, as with search, by fitting a template.
The hard part of dynamic programming is recognising it. For exercises inside the DP topic set this is no obstacle, but elsewhere, as the teacher said, after reading a statement it is genuinely difficult to think of dynamic programming at all. So DP is hard to spot; yet once you suspect a problem might be DP, you are usually right. Dynamic programming is difficult because the statements are misleading and well disguised. At the same time it is very simple: one abstraction, one recurrence, one loop. So one can say: if, for a heavily disguised DP problem, you happen to think of dynamic programming and try it, then in fact you have already solved the problem. DP has a very fixed routine; just follow it and adjust the details to the problem statement, and there is no real difficulty.
Dynamic programming is ubiquitous in reality, otherwise the statements would not be so confusing. When solving a real-world problem, once it starts to feel impossible, try the dynamic programming strategy; it often catches the problem off guard and yields the answer.
A typical recurrence relation for a dynamic programming problem:
f[a][b] = max(f[a-1][b], f[a][b-1]) + coin[a][b]
(This step defines the value of an optimal solution recursively, i.e. it states the state transition equation.)
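This particular recurrence describes collecting the maximum total of coin[a][b] on a grid when moving only right or down; as an illustration, it can be evaluated directly with two loops (names here are my own).

```cpp
#include <algorithm>
#include <vector>

// DP for f[a][b] = max(f[a-1][b], f[a][b-1]) + coin[a][b]:
// the best total collectable on a path from the top-left cell to (a, b)
// moving only right or down. Returns the best total to the bottom-right.
int maxCoinPath(const std::vector<std::vector<int>>& coin) {
    int rows = coin.size(), cols = coin[0].size();
    std::vector<std::vector<int>> f(rows, std::vector<int>(cols, 0));
    for (int a = 0; a < rows; ++a)
        for (int b = 0; b < cols; ++b) {
            int best = 0;                          // boundary: no predecessor
            if (a > 0) best = std::max(best, f[a - 1][b]);
            if (b > 0) best = std::max(best, f[a][b - 1]);
            f[a][b] = best + coin[a][b];
        }
    return f[rows - 1][cols - 1];
}
```

Note how the boundary conditions (first row and column) fall out of the `a > 0` / `b > 0` checks, which is step 4 of the procedure below.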
The general steps for solving a dynamic programming problem:
1. Judge whether the problem has the optimal-substructure property; if not, dynamic programming cannot be used.
2. Divide the problem into subproblems (stages).
3. Establish the state transition equation (the recurrence).
4. Find the boundary conditions.
5. Substitute the known boundary values into the equation.
6. Solve by recurrence.
Sample code for the longest increasing subsequence problem:
#include <cstdio>

const int MAX_N = 1000;     // assumed bound on n (the original left it undefined)

int b[MAX_N + 10];
int aMaxLen[MAX_N + 10];

int main()
{
    int i, j, n;
    scanf("%d", &n);
    for (i = 1; i <= n; i++)
        scanf("%d", &b[i]);
    aMaxLen[1] = 1;
    for (i = 2; i <= n; i++)
    {   // length of the longest increasing subsequence ending at b[i]
        int nTmp = 0;       // longest length found among valid left neighbours
        for (j = 1; j < i; j++)
        {   // search the subsequences ending to the left of position i
            if (b[i] > b[j])
            {
                if (nTmp < aMaxLen[j])
                    nTmp = aMaxLen[j];
            }
        }
        aMaxLen[i] = nTmp + 1;
    }
    int nMax = -1;
    for (i = 1; i <= n; i++)
        if (nMax < aMaxLen[i])
            nMax = aMaxLen[i];
    printf("%d\n", nMax);
    return 0;
}
4. Graph theory
Graph theory is a branch of mathematics that takes graphs as its object of study. A graph in graph theory is made of given points and lines connecting pairs of points. It is usually used to describe a particular relation between things: a point represents a thing, and a line connecting two points indicates that the corresponding two things stand in that relation.
I feel this topic is unusually difficult: even with templates and known algorithms, the problems are still very hard. The teacher first covered how to store the edges and points of a graph. One way is the adjacency matrix, a two-dimensional array; its limitation is that the number of points must not be large, but edges can be many, so it suits dense graphs. The other is the adjacency list; the standard version stores it with pointer-based linked lists, but for convenience and speed it can be modified to use arrays of structs, which run faster and are easier to write; its limitation is the number of edges, so it suits graphs with many points but few edges. The teacher also covered the disjoint set: elements of the same set are placed together and represented by an array, with union and find operations, all of which have fixed template implementations.
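The array-based adjacency list mentioned above is often written in the "chain forward star" style; here is a minimal sketch (the array sizes and names are illustrative assumptions).

```cpp
#include <vector>

// Array-based adjacency list: head[u] is the index of u's first edge,
// and nxt[e] chains to u's next edge. A common contest alternative to
// pointer-based linked lists. Sizes here are illustrative.
const int MAX_V = 1000, MAX_E = 10000;

int head[MAX_V], to[MAX_E], nxt[MAX_E], edgeCount = 0;

void initGraph(int n) {
    for (int v = 0; v < n; ++v) head[v] = -1;  // -1 = no edges yet
    edgeCount = 0;
}

void addEdge(int u, int v) {    // directed edge u -> v
    to[edgeCount] = v;
    nxt[edgeCount] = head[u];   // prepend to u's chain
    head[u] = edgeCount++;
}

// Collect the neighbours of u (in reverse insertion order).
std::vector<int> neighbours(int u) {
    std::vector<int> result;
    for (int e = head[u]; e != -1; e = nxt[e])
        result.push_back(to[e]);
    return result;
}
```

Compared with an adjacency matrix, memory grows with the number of edges rather than with the square of the number of points, which is why it suits sparse graphs.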
Next came the main content, in two parts. The first was the minimum spanning tree. Definition: among all spanning trees of a graph, one whose edge set has minimum total weight is a minimum spanning tree, and determining such a tree T is the minimum spanning tree problem. There are two ways to solve it. One is Prim's algorithm: repeatedly add to the spanning tree the least-weighted edge that has one endpoint inside the tree and the other outside it, together with that new vertex; repeat until all vertices have entered the tree. The other is Kruskal's algorithm: sort all edges from small to large, then try to add each edge and its endpoints to the spanning tree; if adding the edge creates no cycle, keep it, otherwise discard it, until the spanning tree contains n-1 edges. Its time complexity is O(e log e). Prim's algorithm, simply put, maintains two sets, one of taken points and one of untaken points; by edge weight it repeatedly finds a point in the untaken set, moves it into the taken set, and continues until the untaken set is empty. The operation looks simple, but in practice there are many details to mind; the most important is to look up the weights between points in the taken set and points in the untaken set, and then perform the merge. Kruskal's algorithm is comparatively uniform: first sort the edges so the weights run from small to large, then repeatedly take the smallest remaining edge, add it, and check, until the graph is connected.
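Kruskal's algorithm as described above combines edge sorting with the disjoint set used for the cycle check; here is a compact sketch (the Edge layout and names are my own).

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// An edge: weight and its two endpoints.
struct Edge { int w, u, v; };

// Find with path halving: every visited node is re-pointed at its grandparent.
int findRoot(std::vector<int>& parent, int x) {
    while (parent[x] != x) x = parent[x] = parent[parent[x]];
    return x;
}

// Kruskal: sort edges by weight ascending, add each edge whose endpoints
// lie in different components (checked with the disjoint set), and stop
// after n-1 edges. Returns the total weight of a minimum spanning tree
// of an n-vertex connected graph.
int kruskal(int n, std::vector<Edge> edges) {
    std::sort(edges.begin(), edges.end(),
              [](const Edge& a, const Edge& b) { return a.w < b.w; });
    std::vector<int> parent(n);
    std::iota(parent.begin(), parent.end(), 0);  // each vertex its own set
    int total = 0, used = 0;
    for (const Edge& e : edges) {
        int ru = findRoot(parent, e.u), rv = findRoot(parent, e.v);
        if (ru != rv) {                  // adding e creates no cycle
            parent[ru] = rv;
            total += e.w;
            if (++used == n - 1) break;  // n-1 edges: the tree is complete
        }
    }
    return total;
}
```

The "if the edge creates no cycle, keep it" test is exactly the find operation: two endpoints with the same root would close a cycle.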
The second main part was the shortest path problem: simply put, finding on a map the best route from a starting point to a destination. One algorithm is Dijkstra's: maintain a set S storing the vertices whose shortest paths have already been found; initially S contains only the source v, and for each vi in V - S the direct edge from v to vi is assumed to be its shortest path. Each time a shortest path v, ..., vk is settled, vk is moved into S, and the path v, ..., vk, vi is compared with the earlier assumption, keeping the shorter length. The process repeats until every vertex of V has joined S. This algorithm resembles Prim's algorithm for the minimum spanning tree; only the question asked differs. A great drawback of Dijkstra's algorithm is that it fails when weights can be negative, so it must be chosen according to the problem. Another algorithm is Bellman-Ford, which constructs a sequence of shortest-path-length arrays dist1[u], dist2[u], ..., distn-1[u], where:
dist1[u] is the length of the shortest path from the source v to the endpoint u using only one edge, and dist1[u] = edge[v][u];
dist2[u] is the length of the shortest path from v to u using at most two edges;
dist3[u] is the length of the shortest path from v to u using at most three edges and containing no negative-weight cycle;
...
distn-1[u] is the length of the shortest path from v to u using at most n-1 edges and containing no negative-weight cycle.
The ultimate goal of the algorithm is to compute distn-1[u], the shortest path length from the source v to vertex u.
The difference between Dijkstra's algorithm and Bellman-Ford: in Dijkstra's algorithm, once the shortest path from the source to a vertex in S has been found it never changes; only the tentative lengths from the source to vertices still outside S are modified. In Bellman-Ford, every iteration may modify dist[] for all vertices, and the shortest path length from the source to each vertex is only finally determined when the algorithm ends.
There is also the SPFA algorithm, an optimised implementation of Bellman-Ford that is in general more commonly used: 1. initialise the queue Q = {s}; 2. remove the head u and enumerate all edges out of u: if d(v) > d(u) + w(u, v), improve d(v) and set pre(v) = u, and, since d(v) has decreased, v may later improve other points, so if v is not in Q, enqueue it; 3. repeat step 2 until the queue Q is empty (normal termination), or some vertex has been enqueued at least n times (a negative cycle exists).
It is generally used for detecting negative cycles (more efficient than plain Bellman-Ford) and for shortest paths in sparse graphs.
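Steps 1-3 of SPFA translate almost line for line into code. A minimal sketch, assuming no negative cycle is reachable from the source (the enqueue-count check for negative cycles is omitted for brevity):

```cpp
#include <queue>
#include <utility>
#include <vector>

const int INF = 1e9;

// SPFA (queue-based Bellman-Ford): shortest distances from source s in a
// graph that may have negative edge weights but no reachable negative
// cycle. adj[u] holds (v, w) pairs for each edge u -> v of weight w.
std::vector<int> spfa(int n,
        const std::vector<std::vector<std::pair<int, int>>>& adj, int s) {
    std::vector<int> dist(n, INF);
    std::vector<bool> inQueue(n, false);
    std::queue<int> q;
    dist[s] = 0; q.push(s); inQueue[s] = true;
    while (!q.empty()) {
        int u = q.front(); q.pop(); inQueue[u] = false;
        for (auto [v, w] : adj[u]) {
            if (dist[u] + w < dist[v]) {  // edge (u, v) improves v
                dist[v] = dist[u] + w;
                if (!inQueue[v]) {        // v may now improve others
                    q.push(v);
                    inQueue[v] = true;
                }
            }
        }
    }
    return dist;
}
```

The inQueue flags implement the "if v is not in Q, enqueue it" rule; a per-vertex enqueue counter reaching n would signal a negative cycle, as described in step 3.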
All these algorithms have template code, but compared with the earlier topics they are still very complex, especially in the details that must be adapted to each problem's constraints. Perhaps it is just because I am new to them and not yet familiar; I believe that with more practice I can master them thoroughly, since newly learned knowledge needs time to settle.
Sample code for the find operation of the disjoint set (with path compression):

    find3(x)
    {
        r = x;
        while (set[r] != r)   // when the loop ends, r is the root node
            r = set[r];
        i = x;
        while (i != r)        // this loop re-points every node on the find path at the root
        {
            j = set[i];
            set[i] = r;
            i = j;
        }
        return r;
    }
Dijkstra's algorithm in pseudocode:
1. initialise the arrays dist, path, and the set S;
2. while (number of elements in S < n)
   2.1 find the minimum value in dist[]; let its index be k;
   2.2 output dist[k] and path[k];
   2.3 modify the arrays dist and path;
   2.4 add the vertex vk to the set S.
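In practice step 2.1 (find the minimum dist) is usually done with a min-priority queue rather than a linear scan. A standard sketch, not the course template:

```cpp
#include <functional>
#include <queue>
#include <utility>
#include <vector>

const int INF = 1e9;

// Dijkstra with a min-priority queue. Once a vertex is popped with its
// final distance it never changes again (hence: no negative weights).
// adj[u] holds (v, w) pairs for each edge u -> v of weight w.
std::vector<int> dijkstra(int n,
        const std::vector<std::vector<std::pair<int, int>>>& adj, int s) {
    std::vector<int> dist(n, INF);
    // entries are (distance, vertex); std::greater makes it a min-heap
    std::priority_queue<std::pair<int, int>,
        std::vector<std::pair<int, int>>, std::greater<>> pq;
    dist[s] = 0;
    pq.push({0, s});
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d > dist[u]) continue;        // stale queue entry: skip it
        for (auto [v, w] : adj[u])
            if (dist[u] + w < dist[v]) {  // relax edge (u, v)
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
    }
    return dist;
}
```

The `d > dist[u]` test replaces the explicit set S from the pseudocode: the first pop of each vertex is its final shortest distance, and later stale entries are discarded.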
Finally, thank you, teacher, for a semester of teaching. Although my ability is limited, I still learned a great deal, and I have no regrets about choosing to study ACM!