This article is a summary of my reading of Grokking Algorithms by Aditya Bhargava. The code in this post follows the book's source code; my thanks to Aditya Bhargava for making the cases so simple. Friends interested in basic algorithms should read the original. Since I am also a programming beginner, I found the book easy to follow, and its illustrated introductions to the algorithms are likewise very accessible. What follows are just a few of the most basic algorithms, to help straighten out some simple lines of thinking.
Introduction to Algorithms
An algorithm is a set of instructions for completing a task. Any piece of code can be considered an algorithm; the algorithms discussed here are ones that are fast, or that solve interesting problems, or both.
Binary Search
Binary search is an algorithm whose input is a sorted list of elements. If the element you are looking for is in the list, binary search returns its position; otherwise it returns null (None in Python).
The idea is easy to grasp. If you have to guess a number someone picked between 1 and 100, how do you find it in the fewest guesses? You might get lucky and hit it on the first try, but in the worst case simple guessing takes 100 tries.

With binary search this kind of problem becomes very simple: each guess takes the middle value, halving the search range, so the answer is guaranteed within 7 steps.

If the problem expands to 400 million numbers, binary search is without question the better choice. In general, for a list of n elements, binary search takes at most log2(n) steps, while simple search takes up to n steps.

Binary search's strength is that it finds things fast, but it works only when the list is sorted.
The code for this guessing game, using the binary search idea:
# Binary search
def binary_search(lists, item):
    low = 0
    high = len(lists) - 1
    while low <= high:
        mid = (low + high) // 2
        guess = lists[mid]
        if guess == item:
            return "guess is %s" % guess
        elif guess > item:
            high = mid - 1
        else:
            low = mid + 1
    return None

lists = [1, 2, 4, 6, 8]
print(binary_search(lists, 8))
print(binary_search(lists, 3))

Run result:
guess is 8
None
Big O Notation

Big O notation is a special notation that indicates how fast an algorithm is. Since different algorithms have different running times, Big O notation is a more scientific and intuitive way to look at how running time grows.

For example, suppose a list contains n elements. Simple search must examine every element, so it performs n operations. In Big O notation, this running time is O(n). It is called Big O notation because there is a big O in front of the operand... really.

The running time of simple search is always written O(n). When searching a phone book for Adit, he might turn up on the first try, which is the best case, O(1), but Big O notation describes the worst case. So you can say: in the worst case you have to look at every entry in the phone book, and the corresponding running time is O(n).

Some common Big O running times:
- O(log n), also called logarithmic time; such algorithms include binary search.
- O(n), also called linear time; such algorithms include simple search.
- O(n * log n); such algorithms include quicksort.
- O(n**2); such algorithms include selection sort.
- O(n!); such algorithms include the solution to the traveling salesman problem introduced below.
The book illustrates the time each of these algorithms needs to draw a 16-cell grid; they run from fastest to slowest, though of course only for this problem.
- The speed of an algorithm refers not to elapsed time but to how the number of operations grows.
- When we talk about an algorithm's speed, we mean: as the input grows, at what rate does its running time grow?
- The running time of an algorithm is expressed in Big O notation.
- O(log n) is faster than O(n), and the more elements there are to search, the greater the former's advantage.
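To make these growth rates concrete, here is a small sketch (my own illustration, not from the book) printing the operation counts for n = 16:

import math

n = 16
print("O(log n):  ", int(math.log2(n)))      # 4
print("O(n):      ", n)                      # 16
print("O(n log n):", int(n * math.log2(n)))  # 64
print("O(n**2):   ", n ** 2)                 # 256
print("O(n!):     ", math.factorial(n))      # 20922789888000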
The Traveling Salesman Problem

This one genuinely troubles a lot of people. A salesman must visit 5 cities; how does he guarantee the shortest route? 5 cities have 120 different orderings. With n cities, you must perform n! (factorial of n) operations to get the answer, so the running time is O(n!) — factorial time.
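As a sketch of where the n! comes from, here is a hypothetical brute-force solution for 5 cities; the city names and distances are invented for illustration:

import itertools

# invented distances between 5 cities
dist = {
    ("a", "b"): 2, ("a", "c"): 9, ("a", "d"): 10, ("a", "e"): 7,
    ("b", "c"): 6, ("b", "d"): 4, ("b", "e"): 3,
    ("c", "d"): 8, ("c", "e"): 5,
    ("d", "e"): 1,
}

def d(x, y):
    return dist[(x, y)] if (x, y) in dist else dist[(y, x)]

best = None
for route in itertools.permutations("abcde"):  # 5! = 120 possible routes
    length = sum(d(route[i], route[i + 1]) for i in range(len(route) - 1))
    if best is None or length < best[0]:
        best = (length, route)

print(best)  # the shortest route and its total length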
Selection Sort
Many algorithms work only after the data has been sorted. Of course, many languages have built-in sorting algorithms, so you don't have to write your own version from scratch.
Arrays and Linked Lists
When you need to store data in memory, you ask the computer for storage space, and it gives you a storage address. There are two basic ways to store multiple items of data: arrays and linked lists.
The memory used by an array must be contiguous. This means that if the memory right after the array is already occupied, adding an element forces a search for a new stretch of contiguous addresses large enough to hold everything, and if no such stretch exists the data must move elsewhere. To compensate, the computer can reserve space when the array is stored: you only need 3 slots now, but you ask for 10. The extra slots may never be used, wasting memory, and once you pass 10 items you still have to move everything.

Elements in a linked list can be stored anywhere in memory. Each element stores the address of the next one, stringing together a series of scattered memory addresses. Adding an element to a linked list is easy: put it anywhere in memory and store its address in the previous element; deleting is just as easy. But linked lists are markedly weaker than arrays at reading: to read the last element you must read sequentially through everything before it, while an array can jump straight to any position (because the addresses are contiguous, the first address plus an offset gives the location of any element).
Run times for common operations on arrays and linked lists:

             Array   Linked list
  Reading    O(1)    O(n)
  Insertion  O(n)    O(1)
  Deletion   O(n)    O(1)
Which get used more, arrays or linked lists? Obviously it depends on the situation. But arrays are used far more because they support random access, and many use cases demand random access rather than sequential access.
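Python has no built-in linked list (its list behaves like an array), but a minimal sketch of one looks like this:

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None  # stores the "address" of the next element

# build the chain a -> b -> c
head = Node("a")
head.next = Node("b")
head.next.next = Node("c")

# sequential read: to reach the last element you must walk the whole chain
node = head
while node is not None:
    print(node.value)
    node = node.next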
Selection Sort
For example, suppose NetEase Cloud Music wants to rank songs by how many times you have played them. You can loop over the list repeatedly, each pass moving the song with the highest play count into a new list, and stop when the original list is empty. The first pass examines n elements, the next n - 1, and so on, for a total of about (1/2) * n**2 operations; Big O notation omits constants, so the time is O(n**2).

The selection sort code:
# Selection sort, O(n**2)
def find_smallest(arr):
    lowest = 0
    arr_low = arr[0]
    for i in range(1, len(arr)):
        if arr[i] < arr_low:
            arr_low = arr[i]
            lowest = i
    return lowest

def selection_sort(arr):
    new_arr = []
    for i in range(len(arr)):
        smallest = find_smallest(arr)
        new_arr.append(arr.pop(smallest))
    return new_arr

print(selection_sort([3, 2, 9, 6, 4]))

Run result:
[2, 3, 4, 6, 9]
Note: all elements of an array must be of the same type.
Recursion

Recursion means a function calling itself. It merely makes the solution clearer and brings no performance advantage; in fact, in some cases loops perform better. As Leigh Caldwell put it on Stack Overflow: "Loops may achieve a performance gain for your program. Recursion may achieve a performance gain for your programmer. Choose which is more important in your situation!"
Base Case and Recursive Case

When you write a recursive function, you must tell it when to stop recursing. That is why every recursive function has two parts: the base case and the recursive case. The recursive case is where the function calls itself; the base case is where the function no longer calls itself, which prevents an infinite loop.
# Recursion: factorial
def fac(num):
    if num == 1:
        return 1
    else:
        return num * fac(num - 1)

print(fac(5))

Run result:
120
# Recursion: summing a list
def ad(lis):
    if lis == []:
        return 0
    else:
        return lis.pop(0) + ad(lis)

print(ad([1, 2, 3]))

Run result:
6
# Recursion: counting the elements of a list
def num(lis):
    n = 0
    if lis == []:
        return n
    else:
        lis.pop()
        n += 1
        n += num(lis)
        return n

print(num([1, 2, 3, 4, 5]))

Run result:
5
# Recursion: finding the maximum
def ma(lis):
    m = lis[0]
    if len(lis) == 1:
        return m
    else:
        tmp = ma(lis[1:])
        if tmp > m:
            m = tmp
        return m

print(ma([7, 3, 10, 4, 6]))

Run result:
10
Stacks and Queues

These two concepts come up constantly in programming. A queue is FIFO: first in, first out. A stack is the opposite, LIFO: last in, first out. Think of nested function calls in Python: the inner function, though called later, finishes and returns first, and only then does the outer function complete. That is the call stack. A stack allows only two operations: push and pop.

Using the stack has drawbacks too: storing all that information can consume a lot of memory. Every function call takes some memory, and when the call stack is very tall, the computer is holding information for a great many function calls. At that point you can only rewrite the code with a loop, or use tail recursion (an advanced technique I haven't learned yet).
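A minimal sketch of the problem in Python, whose default call-stack limit is roughly 1000 frames:

def count_down_recursive(n):
    if n == 0:
        return
    count_down_recursive(n - 1)  # one more frame on the call stack

def count_down_loop(n):
    while n > 0:  # constant stack height
        n -= 1

count_down_loop(100000)  # runs fine
try:
    count_down_recursive(100000)  # far deeper than the default limit
except RecursionError:
    print("the call stack got too tall!")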
Quicksort and Divide and Conquer

Suppose you want to divide a plot of land evenly into square blocks, making the blocks as large as possible. You can use the divide-and-conquer (D&C) strategy. D&C algorithms are recursive.
Solving a problem with D&C involves two steps:

(1) Find the base case, which must be as simple as possible.

(2) Keep decomposing (shrinking) the problem until it reaches the base case.

By the definition of D&C, every recursive call must shrink the problem. The base case here is that one side's length is an integer multiple of the other's. Take the short side as the unit: mark off as many short-side squares as fit along the long side, then apply the same operation to the leftover strip, and repeat until the long side is an integer multiple of the short side. At that point the short side squared is the largest square.
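This procedure is exactly Euclid's algorithm for the greatest common divisor; here is a recursive sketch of it for a 1680 x 640 plot:

def largest_square(width, height):
    # base case: one side is an integer multiple of the other
    if width % height == 0:
        return height
    # recursive case: apply the same operation to the leftover strip
    return largest_square(height, width % height)

side = largest_square(1680, 640)
print(side)  # 80, so 80 x 80 is the largest square that works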
How D&C works:

(1) Find a simple base case;

(2) Work out how to shrink the problem until it reaches the base case.

D&C is not an algorithm you can apply to a problem directly; rather, it is a way of thinking about problems.
Quicksort
Quicksort uses D&C. For this sorting algorithm, the base case is an array that is empty or contains a single element.

First, pick an element from the array; this element is called the pivot.

Next, partition: find the elements smaller than the pivot and the elements larger than the pivot.

Then quicksort the two sub-arrays recursively, until the base case is reached.
Note: proof by induction is an effective way to prove that an algorithm works; it consists of two steps, the base case and the inductive case.
# Quicksort
def quicksort(arr):
    if len(arr) < 2:
        return arr
    else:
        pivot = arr[0]
        less = [i for i in arr[1:] if i < pivot]
        greater = [i for i in arr[1:] if i >= pivot]
        return quicksort(less) + [pivot] + quicksort(greater)

print(quicksort([4, 5, 7, 2, 3, 9, 4, 0]))

Run result:
[0, 2, 3, 4, 4, 5, 7, 9]
The worst case of quicksort is O(n**2), as slow as selection sort, but its average running time is O(n log n). Merge sort is always O(n log n). That is not absolute, though: merge sort's constant factor is always larger than quicksort's, so quicksort is generally considered faster.
Average Case and Worst Case
If you sort an already-sorted list and always pick the first value as the pivot, that is the worst case: every level of recursion costs O(n), and there are O(n) levels, so the running time is O(n) * O(n) = O(n**2). The best case is picking the middle value each time: as with binary search there are only O(log n) levels (in technical terms, the call stack has height O(log n)), and each level still costs O(n), so the whole algorithm takes O(n) * O(log n) = O(n log n).
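A sketch (my own instrumentation, not from the book) that measures the call-stack height for both pivot choices on an already-sorted list:

def quicksort_depth(arr, use_middle, depth=0):
    # returns the deepest recursion level reached
    if len(arr) < 2:
        return depth
    i = len(arr) // 2 if use_middle else 0
    pivot = arr[i]
    rest = arr[:i] + arr[i + 1:]
    less = [x for x in rest if x < pivot]
    greater = [x for x in rest if x >= pivot]
    return max(quicksort_depth(less, use_middle, depth + 1),
               quicksort_depth(greater, use_middle, depth + 1))

data = list(range(100))  # already sorted
print(quicksort_depth(data, use_middle=False))  # 99: O(n) levels
print(quicksort_depth(data, use_middle=True))   # about 7: O(log n) levels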
Hash Tables and Hash Functions

A hash function maps an input to a number. A Python dictionary makes this easy to picture: a given key always hashes to the same number, and each key corresponds to a value.
- A hash function always maps the same input to the same index.
- A hash function maps different inputs to different indexes.
- A hash function knows how large the array is and returns only valid indexes.
When it comes to dictionaries, you may never need to implement a hash table at all: any good language provides a hash table implementation. Python's hash table implementation is the dictionary, which you can create with the function dict.

So the concept of the hash table is easy to grasp. Hash tables are usually used for lookups. On a voting website they can filter out people who have already voted, that is, deduplication. They are also used to cache frequently visited pages. Caching is a common way of speeding things up: all large websites use caching, and the cached data is stored in a hash table.
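A sketch of the voting check with a dictionary (the voter names are placeholders):

voted = {}  # hash table of people who have already voted

def check_voter(name):
    if voted.get(name):
        print("kick them out!")
    else:
        voted[name] = True
        print("let them vote!")

check_voter("tom")   # let them vote!
check_voter("mike")  # let them vote!
check_voter("mike")  # kick them out!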
Collisions

Collisions bear directly on a hash table's performance. A collision is two keys being assigned the same position. There are many ways to handle collisions; the simplest is this: if two keys map to the same position, store a linked list at that position. If every value in the hash table lands in the first slot, what difference is there between the table and a linked list? Ideally, the hash function maps keys evenly across all the positions in the table. If the linked list stored at some position grows long, the hash table's speed drops sharply.
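A toy sketch of collision handling by chaining; this is for illustration only and is not how Python's dict works internally:

class ChainedHashTable:
    def __init__(self, size=4):
        # each slot holds a list (the "chain") of (key, value) pairs
        self.slots = [[] for _ in range(size)]

    def _index(self, key):
        return hash(key) % len(self.slots)  # always a valid index

    def set(self, key, value):
        slot = self.slots[self._index(key)]
        for i, (k, _) in enumerate(slot):
            if k == key:  # key already present: overwrite its value
                slot[i] = (key, value)
                return
        slot.append((key, value))  # collision or new key: append to the chain

    def get(self, key):
        for k, v in self.slots[self._index(key)]:
            if k == key:
                return v
        return None

table = ChainedHashTable()
table.set("apple", 0.67)
table.set("milk", 1.49)
print(table.get("apple"))  # 0.67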
Performance
On average, a hash table performs every kind of operation in O(1) time. Comparing hash tables with arrays and linked lists:

             Hash (average)  Hash (worst)  Array   Linked list
  Lookup     O(1)            O(n)          O(1)    O(n)
  Insertion  O(1)            O(n)          O(n)    O(1)
  Deletion   O(1)            O(n)          O(n)    O(1)
Load Factor

The load factor describes a hash table's fullness: the number of elements divided by the total number of positions. A load factor greater than 1 means there are more elements than positions, and it may be time to lengthen the hash table. Doesn't adjusting the length take a long time? Right — resizing is costly, so you don't want to do it often. On average, though, even accounting for resizing time, hash table operations still take O(1).
Breadth-First Search

If you want to get from point A to point B, the problem is called a shortest-path problem, and solving it takes two steps.
(1) Model the problem as a graph.

(2) Use breadth-first search to solve it.
A graph is made up of nodes and edges, and is used to model how different things are connected.

Breadth-first search is a search algorithm for graphs that helps answer two kinds of questions.

- First kind: starting from node A, is there a path to node B?
- Second kind: starting from node A, which path to node B is the shortest?

How breadth-first search works:
Say you want to find out whether any of your acquaintances is a mango seller. Start by checking your friends, and add each friend's friends to the end of your search queue, continuing until you have checked everyone or found the first mango seller. Along the way, keep a separate record of the people already checked: re-searching them is pointless and can even lead to an infinite loop.
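A sketch of this search in code; the friend graph is hypothetical, and, following the book, anyone whose name ends in "m" counts as a mango seller:

from collections import deque

graph = {}
graph["you"] = ["alice", "bob", "claire"]
graph["bob"] = ["anuj", "peggy"]
graph["alice"] = ["peggy"]
graph["claire"] = ["thom", "jonny"]
graph["anuj"] = []
graph["peggy"] = []
graph["thom"] = []
graph["jonny"] = []

def person_is_seller(name):
    return name[-1] == "m"  # stand-in test for "sells mangoes"

def search(start):
    search_queue = deque()
    search_queue += graph[start]
    searched = set()  # people already checked, to avoid loops
    while search_queue:
        person = search_queue.popleft()
        if person in searched:
            continue
        if person_is_seller(person):
            print(person + " is a mango seller!")
            return True
        search_queue += graph[person]
        searched.add(person)
    return False

search("you")  # thom is a mango seller!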
Note: an edge in a directed graph is an arrow, and the arrow's direction specifies the direction of the relationship; for example, rama → adit means Rama owes Adit money. The edges of an undirected graph (like the Ross–Rachel graph) have no arrows, and the relationship runs both ways: "Ross is dating Rachel, and Rachel is dating Ross."
Dijkstra's Algorithm

This is also a shortest-path algorithm, but it solves for the shortest path in weighted graphs. In other words, in Dijkstra's algorithm you assign each edge a number, its weight, and the algorithm finds the path with the smallest total weight.
Dijkstra's algorithm has 4 steps.

(1) Find the "cheapest" node, the one you can reach at the lowest cost so far.

(2) Update the costs of that node's neighbors.

(3) Repeat this process until it has been done for every node in the graph.
(4) Calculate the final path.
To compute the shortest path in an unweighted graph, use breadth-first search. To compute the shortest path in a weighted graph, use Dijkstra's algorithm.

Note that Dijkstra's algorithm applies only to graphs with no cycles, and it cannot handle negative-weight edges. For edges with negative weights, use the Bellman-Ford algorithm (which I haven't learned).

The following code implements Dijkstra's algorithm to compute the shortest path.
# Dijkstra's algorithm for the shortest path
graph = {}  # first describe the edges and their weights
graph["start"] = {}
graph["start"]["a"] = 6
graph["start"]["b"] = 2
graph["a"] = {}
graph["a"]["fin"] = 1
graph["b"] = {}
graph["b"]["a"] = 3
graph["b"]["fin"] = 5
graph["fin"] = {}

# the costs table
infinity = float("inf")
costs = {}
costs["a"] = 6
costs["b"] = 2
costs["fin"] = infinity  # the end point starts out infinitely far away

# the parents table
parents = {}
parents["a"] = "start"
parents["b"] = "start"
parents["fin"] = None

processed = []  # nodes that have already been processed

def find_lowest_cost_node(costs):
    lowest_cost = float("inf")
    lowest_cost_node = None
    # go through each node
    for node in costs:
        cost = costs[node]
        # is it the lowest cost so far, and not yet processed?
        if cost < lowest_cost and node not in processed:
            lowest_cost = cost  # found a smaller distance: update
            lowest_cost_node = node
    return lowest_cost_node

# find the lowest-cost node that has not been processed
node = find_lowest_cost_node(costs)
# the loop ends when all nodes have been processed
while node is not None:
    cost = costs[node]
    neighbors = graph[node]  # all neighbors reachable via this node
    for n in neighbors.keys():
        new_cost = cost + neighbors[n]  # cost to reach the neighbor via this node
        # if it is cheaper to reach this neighbor through this node...
        if costs[n] > new_cost:
            costs[n] = new_cost  # ...update the neighbor's minimum cost
            parents[n] = node    # this node becomes the neighbor's parent
    processed.append(node)  # mark the node as processed
    node = find_lowest_cost_node(costs)  # find the next node to process

print("Cost from the start to each node:")
print(costs)

Run result:
Cost from the start to each node:
{'a': 5, 'fin': 6, 'b': 2}
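As a follow-up to the code above, the parents table it filled in lets you walk backward from the end node to recover the actual path:

# trace the shortest path back from "fin" using the parents table
node = "fin"
path = [node]
while node != "start":
    node = parents[node]
    path.append(node)
path.reverse()
print(" -> ".join(path))  # start -> b -> a -> fin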
Greedy Algorithms

The greedy idea is very simple: at every step, take the locally best move, in the hope that the final result is the globally optimal solution. Greedy algorithms do not work in every case, but they are easy to implement.
A simple example: given the class schedule below, you want to hold as many classes as possible in one classroom.

(1) Pick the class that ends the soonest; it is the first class to hold in this classroom.

(2) Next, pick a class that starts after the first class ends. Again choose the one that ends the soonest; it becomes the second class held in this classroom.
(3) Repeat the second step.
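A sketch of this schedule in code, with a made-up class list:

# hypothetical classes as (name, start, end), times in hours
classes = [
    ("art", 9.0, 10.0),
    ("eng", 9.5, 10.5),
    ("math", 10.0, 11.0),
    ("cs", 10.5, 11.5),
    ("music", 11.0, 12.0),
]

schedule = []
last_end = 0.0
# repeatedly take the earliest-ending class that starts after the last one ends
for name, start, end in sorted(classes, key=lambda c: c[2]):
    if start >= last_end:
        schedule.append(name)
        last_end = end

print(schedule)  # ['art', 'math', 'music']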
This is the greedy algorithm. The greedy algorithm is not all-powerful — it often fails to find the optimal solution — but for problems with no substantially better approach, a greedy algorithm is often the most effective one.
The Set-Covering Problem

Suppose you have a radio show you want broadcast in all 50 US states, but each radio station covers a different region, and regions may overlap.

(1) List every possible subset of stations; this is called the power set. There are 2**n possible subsets.

(2) Among these, pick the smallest set that covers all 50 states. The problem is that computing every possible subset of stations takes a long time: because there are 2**n possible subsets, the running time is O(2**n).

A greedy algorithm can resolve the crisis! The following greedy algorithm gets very close to the optimal solution.

(1) Pick the station that covers the most states not yet covered. It does not matter if the station also covers some states that are already covered.
(2) Repeat the first step until all the states are covered.
This is an approximation algorithm. When computing the exact solution would take too long, an approximation algorithm will do.

Approximation algorithms are judged by:

- how fast they are;
- how close the approximate solution is to the optimal solution.

The code for this problem:
states_needed = set(["mt", "wa", "or", "id", "nv", "ut", "ca", "az"])

stations = {}
stations["kone"] = set(["id", "nv", "ut"])
stations["ktwo"] = set(["wa", "id", "mt"])
stations["kthree"] = set(["or", "nv", "ca"])
stations["kfour"] = set(["nv", "ut"])
stations["kfive"] = set(["ca", "az"])

final_stations = set()
while states_needed:
    best_station = None
    states_covered = set()
    for station, states in stations.items():
        covered = states_needed & states  # needed states this station covers
        if len(covered) > len(states_covered):
            best_station = station
            states_covered = covered
    states_needed -= states_covered
    final_stations.add(best_station)

print(final_stations)

Run result:
{'kone', 'ktwo', 'kthree', 'kfive'}
A greedy algorithm also gives a simple answer to the traveling salesman problem.

Simply defined, NP-complete problems are problems known to be hard to solve, such as the traveling salesman problem and the set-covering problem. The algorithms themselves are not the hard part; the hard part is recognizing which problems call for them. Determining whether a problem is NP-complete is difficult, because the differences between easy problems and NP-complete problems are usually very subtle.
How to judge whether a problem is NP-complete:

- The algorithm runs very fast when there are few elements, but becomes very slow as the number of elements grows.
- Problems involving "all combinations" are usually NP-complete.
- The problem cannot be broken into small subproblems, and you must consider every possibility; it may then be NP-complete.
- If the problem involves a sequence (such as the sequence of cities in the traveling salesman problem) and is hard to solve, it may be NP-complete.
- If the problem involves a set (such as a set of radio stations) and is hard to solve, it may be NP-complete.
- If the problem can be restated as the set-covering problem or the traveling salesman problem, it is definitely NP-complete.
Dynamic Programming: The Knapsack Problem

A thief has a knapsack with capacity 4. The store has three items he could steal: a stereo worth 3000 with weight 4, a laptop worth 2000 with weight 3, and a guitar worth 1500 with weight 1.

Trying every combination one by one takes O(2**n) time; it certainly works, but it is not the best approach.
Dynamic programming solves the subproblems first, then works gradually up to the big problem.

Every dynamic programming algorithm starts with a grid; the grid for the knapsack problem is as follows.

The first row is the guitar row: the only choice is whether or not to take the guitar, and since taking it beats taking nothing, you steal it to maximize the value so far.

The second row is the stereo row: now you can choose between the guitar and the stereo.

The third row is the laptop row: all three items are available.

The order of the rows makes no difference to the result. Also, the optimal solution may not fill the knapsack completely.
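A sketch of the grid-filling in code, using the item values and weights above; cell[i][w] holds the best value achievable with the first i items and capacity w:

# items as (name, value, weight); knapsack capacity 4
items = [("guitar", 1500, 1), ("stereo", 3000, 4), ("laptop", 2000, 3)]
capacity = 4

# one extra row (no items) and column (capacity 0) of zeros
cell = [[0] * (capacity + 1) for _ in range(len(items) + 1)]

for i, (name, value, weight) in enumerate(items, start=1):
    for w in range(1, capacity + 1):
        without_item = cell[i - 1][w]  # best value without this item
        # best value with this item, if it fits: its value plus the best
        # use of the remaining capacity from the previous row
        with_item = value + cell[i - 1][w - weight] if weight <= w else 0
        cell[i][w] = max(without_item, with_item)

print(cell[len(items)][capacity])  # 3500: the guitar plus the laptop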
Note, however, that dynamic programming works only when each subproblem is discrete, that is, independent of the other subproblems.
The k-Nearest Neighbors Algorithm

Extract an item's features and score each one as a coordinate on an axis: this abstracts items into points in space. Then use the Pythagorean formula to compute the distances to other points and determine which points are most similar.

The distance between Priyanka and Morpheus works out to 24, so we can draw the conclusion that Priyanka's preferences are closer to Justin's than to Morpheus's, and recommend films to Priyanka based on Justin's preferences.
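A sketch of the distance computation; the five-feature rating vectors below are hypothetical:

import math

# ratings on five hypothetical features
priyanka = (3, 4, 4, 1, 4)
justin = (4, 3, 5, 1, 5)
morpheus = (2, 5, 1, 3, 1)

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

print(distance(priyanka, justin))    # 2.0: close in taste
print(distance(priyanka, morpheus))  # about 4.9: much further apart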
Regression
Suppose you want not only to recommend films to Priyanka but also to predict how many points she will give a film. Find the people nearest to her, ask them all to rate the film, and average the ratings; the result, say 4.2, is your prediction. This is regression.
You will use KNN for two basic tasks, classification and regression:

- classification is sorting into groups;
- regression is predicting a result (such as a number).

Compared with plain distance, in practice cosine similarity is usually used to score similarity; it is more accurate and more common.
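A minimal cosine-similarity sketch, reusing the rating vectors (and the math import) from the distance example above:

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(priyanka, justin))    # about 0.98: very similar
print(cosine_similarity(priyanka, morpheus))  # about 0.77: less similar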
The KNN algorithm is widely used in machine learning. OCR stands for optical character recognition: you photograph a printed page and the computer automatically recognizes the text in it.

To use KNN for OCR:
(1) Browse a large number of digit images and extract the features of those digits.

(2) When a new image comes in, extract its features and find out which images are its nearest neighbors!

OCR algorithms extract features such as line segments, points, and curves. When a new character is encountered, the same features can be extracted from it.
This is only a small part of the world of programming algorithms; many advanced algorithms lie ahead. For any code in this article whose execution you cannot follow, step through it with a debugger. Algorithms are a core part of programming, and the quality of your code has a great deal to do with how you think. I hope beginners in Python programming can likewise find good ways of thinking to solve the problems they meet. Because this book is fairly simple and quick to read, there may be some mistakes here; I welcome criticism and corrections.