Algorithm one: Quick sort
Quicksort is a sorting algorithm developed by Tony Hoare. On average, sorting n items takes O(n log n) comparisons. In the worst case it requires O(n²) comparisons, but that situation is uncommon. In practice, quicksort is typically faster than other O(n log n) algorithms because its inner loop can be implemented efficiently on most architectures.
Quicksort uses a divide-and-conquer strategy to split a list into two sub-lists.
Algorithm steps:
1. Pick an element from the sequence; this element is called the pivot.
2. Reorder the sequence so that every element smaller than the pivot is placed before it and every element larger than the pivot is placed after it (equal elements can go to either side). After this pass the pivot sits in its final position. This is called the partition operation.
3. Recursively sort the sub-sequence of elements smaller than the pivot and the sub-sequence of elements larger than the pivot.
At the bottom of the recursion the sequence has size zero or one, which is already sorted. Although the algorithm keeps recursing, it always terminates, because each iteration puts at least one element into its final position.
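A minimal Python sketch of these three steps (the function name and list-copying style are illustrative; a production version would partition in place rather than build new lists):

    def quick_sort(seq):
        # Bottom of the recursion: size 0 or 1 is already sorted.
        if len(seq) <= 1:
            return seq
        pivot = seq[0]                                  # step 1: pick a pivot
        smaller = [x for x in seq[1:] if x < pivot]     # step 2: partition
        larger = [x for x in seq[1:] if x >= pivot]
        # Step 3: recurse on both sides; the pivot is now in its final position.
        return quick_sort(smaller) + [pivot] + quick_sort(larger)

For example, quick_sort([3, 1, 4, 1, 5]) returns [1, 1, 3, 4, 5].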
Algorithm two: Heap sort
Heap sort (heapsort) is a sorting algorithm designed around the heap data structure. A heap is an approximately complete binary tree that satisfies the heap property: the key of a child node is always less than (or greater than) that of its parent.
The average time complexity of heap sort is O(n log n).
Algorithm steps:
1. Build a heap H[0..n-1].
2. Swap the heap top (the maximum) with the heap tail.
3. Reduce the heap size by 1 and call shift_down(0) to move the new top element down to its proper position.
4. Repeat from step 2 until the heap size is 1.
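A compact Python sketch of these steps using a max-heap; the helper is written out by hand here, and the function names are illustrative:

    def sift_down(a, start, end):
        # Restore the max-heap property in a[start..end], moving a[start] down.
        root = start
        while 2 * root + 1 <= end:
            child = 2 * root + 1                      # left child
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1                            # prefer the larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    def heap_sort(a):
        n = len(a)
        for start in range(n // 2 - 1, -1, -1):       # step 1: build the heap
            sift_down(a, start, n - 1)
        for end in range(n - 1, 0, -1):               # step 2: swap top and tail
            a[0], a[end] = a[end], a[0]
            sift_down(a, 0, end - 1)                  # steps 3-4: shrink and re-sift

heap_sort sorts the list in place, in ascending order.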
Algorithm three: Merge sort
Merge sort is an efficient sorting algorithm based on the merge operation. It is a very typical application of divide and conquer.
Algorithm steps:
1. Allocate a space whose size is the sum of the two sorted sequences; it will hold the merged sequence.
2. Set two pointers, each initially at the start of one of the sorted sequences.
3. Compare the elements the two pointers point to, place the smaller one into the merged space, and advance that pointer.
4. Repeat step 3 until one pointer reaches the end of its sequence.
5. Copy all remaining elements of the other sequence directly to the end of the merged sequence.
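A minimal top-down Python sketch: the merge loop follows steps 1-5, and the recursive splitting supplies the two sorted sequences (the function name is illustrative):

    def merge_sort(seq):
        if len(seq) <= 1:
            return seq
        mid = len(seq) // 2
        left = merge_sort(seq[:mid])                  # two sorted sub-sequences
        right = merge_sort(seq[mid:])
        merged = []                                   # step 1: merge space
        i = j = 0                                     # step 2: two pointers
        while i < len(left) and j < len(right):       # steps 3-4: compare, advance
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])                       # step 5: copy the remainder
        merged.extend(right[j:])
        return merged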
Algorithm four: Binary search
Binary search is a search algorithm that finds a particular element in a sorted array. The search starts with the middle element of the array: if the middle element is exactly the target, the search ends; if the target is greater or less than the middle element, the search continues in the half of the array that is greater or less than the middle element, again comparing against that half's middle element. If at some step the remaining subarray is empty, the target is not in the array. Each comparison halves the search range, so the complexity is O(log n).
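A short Python sketch of this halving loop (the function name is illustrative):

    def binary_search(a, target):
        # Return the index of target in sorted list a, or -1 if absent.
        lo, hi = 0, len(a) - 1
        while lo <= hi:                    # a non-empty range remains
            mid = (lo + hi) // 2
            if a[mid] == target:
                return mid
            elif a[mid] < target:
                lo = mid + 1               # continue in the upper half
            else:
                hi = mid - 1               # continue in the lower half
        return -1                          # range is empty: not found

For example, binary_search([1, 3, 5, 7], 5) returns 2.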
Algorithm five: BFPRT (linear-time selection)
The problem the BFPRT algorithm solves is a classic one: selecting the k-th largest (or k-th smallest) element from a sequence of n elements. Through an ingenious analysis, BFPRT guarantees linear time complexity even in the worst case. Its idea is similar to that of quicksort; of course, to keep the worst case at O(n), the algorithm's five authors (Blum, Floyd, Pratt, Rivest, and Tarjan) applied some exquisite processing.
Algorithm steps:
1. Divide the n elements into ⌈n/5⌉ groups of 5.
2. Find the median of each group, using any sorting method, such as insertion sort.
3. Recursively call the selection algorithm to find the median of all the group medians from the previous step; call it x (with an even number of medians, choose the middle one).
4. Partition the array around x; let k be the number of elements less than or equal to x, so n-k elements are greater than x.
5. If i == k, return x; if i < k, recursively find the i-th smallest element among the elements less than x; if i > k, recursively find the (i-k)-th smallest element among the elements greater than x.
Termination condition: when n = 1, the single element returned is the i-th smallest.
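A minimal Python sketch of these steps (the function name is illustrative; sorting each 5-element group stands in for the insertion sort mentioned in step 2, and the duplicate handling slightly generalizes step 5):

    def select(a, i):
        # Return the i-th smallest element of a (1-indexed), worst-case O(n).
        if len(a) == 1:                              # termination: n == 1
            return a[0]
        # Steps 1-2: groups of 5, take the median of each group.
        medians = [sorted(a[j:j + 5])[len(a[j:j + 5]) // 2]
                   for j in range(0, len(a), 5)]
        # Step 3: recursively find the median of medians, x.
        x = select(medians, (len(medians) + 1) // 2)
        # Step 4: partition around x; k elements are <= x.
        smaller = [y for y in a if y < x]
        larger = [y for y in a if y > x]
        k = len(a) - len(larger)
        if len(smaller) < i <= k:                    # x fills ranks len(smaller)+1..k
            return x
        elif i <= len(smaller):                      # step 5: recurse on one side
            return select(smaller, i)
        else:
            return select(larger, i - k)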
Algorithm six: DFS (depth-first search)
Depth-first search (DFS) is one of the search algorithms. It traverses a tree's nodes along the tree's depth, searching each branch as deep as possible. When all edges of node v have been explored, the search backtracks to the starting node of the edge along which v was discovered. This process continues until all nodes reachable from the source node have been discovered. If undiscovered nodes remain, one of them is selected as a new source node and the process repeats, until every node has been visited. DFS is a blind (uninformed) search.
Depth-first search is a classical algorithm in graph theory. It can produce a topological ordering of the target graph, which in turn conveniently solves many related graph problems, such as the maximum path problem. In general, a stack data structure (or recursion) is used to implement the DFS algorithm.
Algorithm steps for depth-first traversal of a graph:
1. Visit vertex v.
2. Starting in turn from each unvisited vertex adjacent to v, depth-first traverse the graph, until every vertex connected to v by a path has been visited.
3. If any vertex in the graph is still unvisited, start a depth-first traversal from one of the unvisited vertices, and repeat until all vertices in the graph have been visited.
The above description may seem abstract, so here is an example:
After visiting some vertex v in the graph, DFS sets out from v to visit any of its adjacent vertices w1; then from w1 it visits a vertex w2 that is adjacent to w1 but has not yet been visited; then from w2 it carries out a similar visit, and so on, until it reaches a vertex u all of whose adjacent vertices have been visited.
Then it takes a step back, retreating to the vertex visited just before u, and checks whether that vertex has any other adjacent vertices that have not been visited. If so, it visits one of them and then sets out from it in the same manner as above; it repeats this process until every vertex in the connected graph has been visited.
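A short Python sketch of this traversal; the adjacency-dict representation and function names are illustrative choices, and recursion plays the role of the stack:

    def dfs(graph, v, visited=None):
        # graph maps each vertex to a list of adjacent vertices.
        if visited is None:
            visited = set()
        visited.add(v)                       # step 1: visit vertex v
        for w in graph[v]:                   # step 2: recurse into unvisited neighbours
            if w not in visited:
                dfs(graph, w, visited)
        return visited

    def dfs_all(graph):
        # Step 3: restart from any unvisited vertex (handles disconnected graphs).
        visited = set()
        for v in graph:
            if v not in visited:
                dfs(graph, v, visited)
        return visited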
Algorithm seven: BFS (breadth-first search)
Breadth-first search (BFS) is a graph search algorithm. Simply put, BFS starts at the root node and traverses the tree (graph) along its width, level by level. The algorithm terminates once all nodes have been visited. BFS is also a blind search. In general, a queue data structure is used to implement the BFS algorithm.
Algorithm steps:
1. First put the root node into the queue.
2. Remove the first node from the queue and check whether it is the target.
If the target is found, the search is complete; return the result.
Otherwise, add all of its direct child nodes that have not yet been checked to the queue.
3. If the queue is empty, the whole graph has been checked, meaning the target is not in the graph. End the search and return "target not found".
4. Repeat step 2.
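A minimal Python sketch of these four steps (graph representation and names are illustrative):

    from collections import deque

    def bfs(graph, root, target):
        # Return True if target is reachable from root, visiting level by level.
        queue = deque([root])                # step 1: put the root in the queue
        visited = {root}
        while queue:                         # step 3: empty queue means not found
            node = queue.popleft()           # step 2: take the first node, test it
            if node == target:
                return True
            for child in graph[node]:       # enqueue unchecked direct neighbours
                if child not in visited:
                    visited.add(child)
                    queue.append(child)
        return False                         # step 4 is the loop itself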
Algorithm eight: Dijkstra's algorithm
Dijkstra's algorithm was proposed by the Dutch computer scientist Edsger Dijkstra. It uses a breadth-first-style search to solve the single-source shortest path problem on a non-negatively weighted graph, and finally obtains a shortest-path tree. The algorithm is often used in routing, or as a submodule of other graph algorithms.
The input of the algorithm consists of a weighted directed graph G and a source vertex s in G. We use V to denote the set of all vertices in G. Each edge of the graph is an ordered pair of vertices: (u, v) indicates an edge from vertex u to vertex v. We use E to denote the set of all edges in G, and the weights of the edges are defined by a weight function w: E → [0, ∞). Thus w(u, v) is the non-negative weight of the edge from vertex u to vertex v. The weight of an edge can be thought of as the distance between the two vertices. The weight of a path between any two vertices is the sum of the weights of all edges on that path. Given vertices s and t in V, Dijkstra's algorithm can find the lowest-weight path from s to t (i.e., the shortest path). The algorithm can also find the shortest paths from a source vertex s to every other vertex in the graph. Dijkstra's algorithm is the fastest known single-source shortest path algorithm for directed graphs without negative edge weights.
Algorithm steps:
1. Initially let S = {v0} and T = {the remaining vertices}. Each vertex vi in T gets a distance value:
If the arc <v0, vi> exists, d(v0, vi) is the weight on that arc;
If the arc <v0, vi> does not exist, d(v0, vi) is ∞.
2. From T, select the vertex w whose distance value is smallest and which is not in S, and add it to S.
3. Modify the distance values of the vertices remaining in T: if adding w as an intermediate vertex shortens the distance from v0 to some vi, update that distance value.
4. Repeat steps 2 and 3 until S contains all vertices.
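A compact Python sketch of these steps; it uses heapq as a priority queue to find the nearest unfinished vertex, and the adjacency-dict format (every vertex appears as a key) is an assumption of this sketch:

    import heapq

    def dijkstra(graph, source):
        # Shortest distances from source; graph[u] is a dict {v: w(u, v)}.
        dist = {v: float('inf') for v in graph}   # step 1: initial distance values
        dist[source] = 0
        heap = [(0, source)]
        done = set()                              # the set S of finished vertices
        while heap:
            d, w = heapq.heappop(heap)            # step 2: nearest vertex not in S
            if w in done:
                continue
            done.add(w)
            for v, weight in graph[w].items():    # step 3: relax edges through w
                if d + weight < dist[v]:
                    dist[v] = d + weight
                    heapq.heappush(heap, (dist[v], v))
        return dist                               # step 4: loop until S covers V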
Algorithm nine: Dynamic programming
Dynamic programming is a method used in mathematics, computer science, and economics to solve a complex problem by decomposing the original problem into relatively simple subproblems. Dynamic programming is often applied to problems with overlapping subproblems and the optimal substructure property, and it often takes far less time than a naive solution.
The basic idea behind dynamic programming is quite simple. In general, to solve a given problem we need to solve its different parts (the subproblems) and then combine the subproblem solutions into a solution of the original problem. Often many of the subproblems are very similar, so the dynamic programming approach tries to solve each subproblem only once, reducing the amount of computation: once a given subproblem has been solved, its result is stored, so the next time the solution of the same subproblem is needed it can be looked up directly. This approach is especially useful when the number of repeated subproblems grows exponentially with the input size.
The most classic dynamic programming problem is the knapsack problem; a sketch of it follows the two properties below.
Algorithm steps:
1. Optimal substructure property. If the subproblem solutions contained in an optimal solution to a problem are themselves optimal, we say the problem has the optimal substructure property (i.e., it satisfies the principle of optimality). The optimal substructure property provides an important clue that a problem can be solved by dynamic programming.
2. Overlapping subproblems property. When a top-down recursive algorithm solves the problem, the subproblems it generates are not always new; some subproblems are computed over and over. The dynamic programming algorithm exploits this overlap: it computes each subproblem only once, saves the result in a table, and when an already-computed subproblem comes up again it simply looks up the result in the table, achieving higher efficiency.
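A minimal sketch of the 0/1 knapsack mentioned above (the function name and the one-dimensional table are illustrative choices; each dp entry is a stored subproblem result, illustrating both properties):

    def knapsack(weights, values, capacity):
        # 0/1 knapsack: best total value within capacity, via a DP table.
        # dp[c] = best value achievable with capacity c using items seen so far.
        dp = [0] * (capacity + 1)
        for w, v in zip(weights, values):
            for c in range(capacity, w - 1, -1):   # backwards: each item used once
                dp[c] = max(dp[c], dp[c - w] + v)  # optimal substructure
        return dp[capacity]

For example, knapsack([2, 3, 4], [3, 4, 5], 5) returns 7 (take the first two items).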
Algorithm ten: Naive Bayes classification
Naive Bayes is a simple probabilistic classification algorithm based on Bayes' theorem. Bayesian classification rests on probabilistic reasoning: how to complete reasoning and decision-making tasks when the presence of various conditions is uncertain and only their probabilities are known. Probabilistic reasoning is the counterpart of deterministic reasoning. The naive Bayes classifier is based on an independence assumption: each feature of a sample is assumed to be unrelated to the other features.
The naive Bayes classifier relies on an accurate underlying probability model and achieves very good classification results on supervised training sets. In many practical applications, the parameters of the naive Bayes model are estimated by maximum likelihood; in other words, the naive Bayes model can work without Bayesian probability or any Bayesian method.
Despite its simple ideas and simplistic assumptions, the naive Bayes classifier can still achieve fairly good results in many complex real-world situations.
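A toy Python sketch of a naive Bayes classifier for discrete features; all names here are hypothetical, the Laplace-smoothing denominator assumes binary feature values, and real projects would typically use a library implementation:

    from collections import Counter, defaultdict

    def train(samples):
        # samples: list of (feature_tuple, label) pairs.
        priors = Counter(label for _, label in samples)   # class counts
        counts = defaultdict(Counter)                     # counts[(i, label)][value]
        for features, label in samples:
            for i, value in enumerate(features):
                counts[(i, label)][value] += 1
        return priors, counts, len(samples)

    def classify(features, priors, counts, n):
        # Choose the label maximizing P(label) * prod_i P(feature_i | label).
        best_label, best_score = None, -1.0
        for label, class_count in priors.items():
            score = class_count / n                       # prior P(label)
            for i, value in enumerate(features):
                # Independence assumption: each feature contributes separately.
                # Laplace smoothing; the +2 assumes binary feature values.
                score *= (counts[(i, label)][value] + 1) / (class_count + 2)
            if score > best_score:
                best_label, best_score = label, score
        return best_label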
Original link: http://cricode.com/2001.html