Algorithm (Python)


An algorithm is a precise, effective sequence of steps for solving a problem.

The complexity of an algorithm describes how efficiently the code runs. It is written with an uppercase O and parentheses, such as O(1) and O(n).

Algorithm complexity is asymptotic: for an input of size n, if the running time is n³ + 5n + 9, the complexity of the algorithm is O(n³); only the dominant term matters.
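As a quick, illustrative sketch (not from the original), you can see why the lower-order terms are dropped: they become negligible as n grows.

for n in (10, 100, 1000, 10000):
    total = n**3 + 5*n + 9  # the running time from the example above
    print(n, n**3 / total)  # this ratio approaches 1 as n grows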

Recursion

Recursion is when a function calls itself. In most cases this puts more pressure on the computer, but it is sometimes useful. For example:

The Tower of Hanoi

Move the plates from column A to column C in the fewest moves possible; a large plate can only ever sit under a smaller one.

Recursive Implementation:

def hanoi(x, a, b, c):  # move x plates from a to c
    if x > 0:
        hanoi(x - 1, a, c, b)     # step 1: move all but the largest plate from a to b
        print('%s->%s' % (a, c))  # step 2: move the largest plate from a to c
        hanoi(x - 1, b, a, c)     # step 3: move the remaining plates from b to c

hanoi(10, 'A', 'B', 'C')

# Count the number of moves
def h(x):
    num = 1
    for i in range(x - 1):
        num = 2 * num + 1
    print(num)

h(10)
Computing the Fibonacci sequence recursively:

def fei(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fei(n - 1) + fei(n - 2)

You will find that even when n is only a few dozen, your computer's resource usage soars: the naive recursion recomputes the same values over and over.

In fact, if you rewrite it as a generator, you will find that no matter how big n is, there is no stutter. That is a feature of generators, which this post does not cover in depth.

# Generator version
def fei(n):
    pre, cur = 0, 1
    while n >= 0:
        yield pre
        n -= 1
        pre, cur = cur, pre + cur

for i in fei(400000):
    print(i)
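Another common fix, not covered in the original post, is to memoize the recursive version so each value is computed only once. A minimal sketch using the standard library's functools.lru_cache (the name fei_cached is ours):

from functools import lru_cache

@lru_cache(maxsize=None)  # cache every result so each fei_cached(k) runs once
def fei_cached(n):
    if n < 2:
        return n
    return fei_cached(n - 1) + fei_cached(n - 2)

print(fei_cached(90))  # fast, though still bounded by the recursion limit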

 

Python limits the recursion depth. You can raise the limit with the sys module:

import sys
sys.setrecursionlimit(1000000)

Search

1. Sequential search

There is not much to say here: it is just a for loop, with time complexity O(n).

def linear_search(data_set, value):
    for i in range(len(data_set)):
        if data_set[i] == value:
            return i
    return None

 

2. Binary search

Time complexity: O(log n)

Halve the search range each time: check whether the target value lies in the left half or the right half, then move the left or right endpoint accordingly and repeat. Note that binary search requires the list to be sorted.

Non-recursive version:

def binary_search(li, val):
    low = 0
    high = len(li) - 1
    while low <= high:
        mid = (low + high) // 2
        if li[mid] == val:
            return mid
        elif li[mid] > val:
            high = mid - 1
        else:
            low = mid + 1
    return None
Recursive version:
def bin_search_rec(data_set, value, low, high):
    if low <= high:  # note: <=, otherwise a range of one element is never checked
        mid = (low + high) // 2
        if data_set[mid] == value:
            return mid
        elif data_set[mid] > value:
            return bin_search_rec(data_set, value, low, mid - 1)
        else:
            return bin_search_rec(data_set, value, mid + 1, high)
    else:
        return None
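A quick usage check of both versions (an illustrative sketch; remember the list must already be sorted):

li = [1, 3, 5, 7, 9, 11]
print(binary_search(li, 7))                   # 3
print(bin_search_rec(li, 7, 0, len(li) - 1))  # 3
print(binary_search(li, 4))                   # None: not in the list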

Sort

First, the three slow sorts:

1. Bubble sort

The principle: compare adjacent pairs of numbers in the list; if the one in front is smaller than the one behind, swap them. With this comparison direction, the finished list is in descending order, with the largest number at the front.

Code:

def bubble_sort(li):
    for j in range(len(li) - 1):     # number of passes
        for i in range(1, len(li)):  # compare each pair of neighbours
            if li[i] > li[i - 1]:
                li[i], li[i - 1] = li[i - 1], li[i]
    return li

In the worst case, bubble sort swaps on every comparison; the time complexity is O(n²).

In the best case, the list is already sorted, so you can add an optimization: keep a flag, and if a whole pass makes no swaps, return immediately.

# Optimized bubble sort
def bubble_sort_opt(li):
    for j in range(len(li) - 1):
        flag = False
        for i in range(1, len(li)):
            if li[i] > li[i - 1]:
                li[i], li[i - 1] = li[i - 1], li[i]
                flag = True
        if not flag:  # no swaps in this pass: already sorted
            return li
    return li
2. Insert sort

Principle: divide the list into two parts, an ordered region and an unordered region. The ordered region starts with a single element. Repeatedly take an element from the unordered region and insert it into its place in the ordered region, until the unordered region is empty.

def insert_sort(li):
    for i in range(1, len(li)):
        tmp = li[i]
        j = i - 1
        while j >= 0 and tmp < li[j]:  # find a suitable position to insert
            li[j + 1] = li[j]
            j -= 1
        li[j + 1] = tmp
    return li

The time complexity is O(n²).

 

3. Select sort

Principle: traverse the list once and put the smallest value in the first position; then find the smallest value in the rest of the list and put it in the second position, and so on.

def select_sort(li):
    for i in range(len(li) - 1):
        min_loc = i  # assume the smallest value is at index i
        for j in range(i + 1, len(li)):
            if li[j] < li[min_loc]:
                min_loc = j
        if min_loc != i:  # the smallest value was found elsewhere: swap it in
            li[i], li[min_loc] = li[min_loc], li[i]
    return li

The time complexity is O(n²).

Next, the three fast sorts:

4. Quick sort

Principle: move a chosen element (the pivot) to the position where it belongs, so that every element to its left is smaller than it and every element to its right is larger. Once every element has been moved home in this way, the list is sorted.

An animation of the partition process (the original post linked one here) can help in understanding the following code.

Code:

# Partition function: move the pivot to its final position
def partition(data, left, right):
    tmp = data[left]  # save the pivot; the slot at left is now "empty"
    while left < right:  # stop when the two pointers meet
        while left < right and data[right] >= tmp:
            right -= 1            # move right inward past elements >= pivot
        data[left] = data[right]  # an element smaller than the pivot fills the empty slot on the left
        while left < right and data[left] <= tmp:
            left += 1             # move left inward past elements <= pivot
        data[right] = data[left]  # an element larger than the pivot fills the slot on the right
    data[left] = tmp              # the pointers meet: put the pivot home
    return left                   # return the pivot's final position

# Call the partition function recursively to sort
def quick_sort(data, left, right):
    if left < right:
        mid = partition(data, left, right)  # put the pivot in place
        quick_sort(data, left, mid - 1)     # sort the elements on the left
        quick_sort(data, mid + 1, right)    # sort the elements on the right
    return data

On average, the complexity of quick sort is O(n log n).

The worst case is when the partition never splits the list into two parts (for example, a list that is already sorted); the complexity then degrades to O(n²). To avoid this worst case, pick a random element as the pivot instead of always taking the first one, as in the sketch below.
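A minimal sketch of the random pivot idea, reusing the partition function above (the name random_partition is ours): swap a randomly chosen element to the front before partitioning.

import random

def random_partition(data, left, right):
    i = random.randint(left, right)            # choose a random pivot index
    data[left], data[i] = data[i], data[left]  # move the pivot to the front
    return partition(data, left, right)        # then partition as before

Inside quick_sort, simply call random_partition instead of partition.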

 

5. Merge sort

Principle: take a list whose left half and right half are each ordered, and combine the two halves, element by element, into a single ordered list. This operation is called a merge.

To sort, the list is split in half again and again until each piece is a single element, which is trivially ordered. The ordered pieces are then merged pairwise, level by level, until they are combined back into one ordered list.

Code:

def merge(li, left, mid, right):
    # One merge: combine the two ordered halves li[left..mid] and li[mid+1..right]
    i = left
    j = mid + 1
    ltmp = []
    # Compare the heads of the two halves and take the smaller element each time
    while i <= mid and j <= right:
        if li[i] < li[j]:
            ltmp.append(li[i])
            i += 1
        else:
            ltmp.append(li[j])
            j += 1
    # One half may still have elements left over: append them
    while i <= mid:
        ltmp.append(li[i])
        i += 1
    while j <= right:
        ltmp.append(li[j])
        j += 1
    li[left:right + 1] = ltmp
    return li

def _merge_sort(li, left, right):
    # Split down to single elements, then merge back up into ordered lists
    if left < right:
        mid = (left + right) // 2
        _merge_sort(li, left, mid)
        _merge_sort(li, mid + 1, right)
        merge(li, left, mid, right)
    return li

def merge_sort(li):
    return _merge_sort(li, 0, len(li) - 1)

The time complexity is O(n log n): the splitting goes log n levels deep, and the merges at each level do O(n) work in total.

Note that merge sort, unlike the previous sorts, also has a space complexity of O(n) for the temporary list.

 

6. Heap sort

I saved this one for last because it is the most troublesome, and saving the most troublesome for last is the responsible way to work.

Before heap sort makes sense, you first need to understand trees.

Tree

A tree is a data structure.

A tree is a set of n nodes. If n = 0, it is an empty tree. If n > 0, one node serves as the root node, and the remaining nodes can be divided into m sets, each of which is itself a tree (a subtree).

Some concepts that may come up:

Root node: the first node of the tree; it has no parent.

Leaf node: a node with no children.

Depth (height) of the tree: the number of layers in the tree.

Child node and parent node: the relationship between a node and the node directly above it.


Binary Tree

Building on trees, a binary tree is a tree in which each node has at most two children.

Full binary tree: every node other than the leaves has exactly two children, and all leaves are at the same depth.

Complete binary tree: derived from the full binary tree. If the tree's depth is k, every layer except the k-th is completely filled (no empty positions), and the nodes on layer k are packed to the left in order.

 

Binary Tree Storage

A binary tree can be stored in linked form or sequentially (in a list). This article only discusses sequential storage.

Thoughts:

What is the relationship between the index of a parent node and that of its left child? 0-1, 1-3, 2-5, 3-7, 4-9, so i → 2i + 1.

What is the relationship between the index of a parent node and that of its right child? 0-2, 1-4, 2-6, 3-8, 4-10, so i → 2i + 2.
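A tiny sketch (illustrative, not from the original) to check these relations on a list-stored complete binary tree:

li = ['a', 'b', 'c', 'd', 'e', 'f', 'g']  # a complete binary tree stored in a list

def left_child(i):
    return 2 * i + 1

def right_child(i):
    return 2 * i + 2

def parent(i):
    return (i - 1) // 2  # the inverse relation, valid for both children

print(li[left_child(0)], li[right_child(0)])  # b c: the children of the root
print(parent(5))                              # 2: index 5 ('f') hangs under index 2 ('c')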

 

Now let's look at the heap itself. Heaps are troublesome to cover fully; I will write about heaps, stacks, and other data structures in another post. Here we only discuss what is needed for sorting.

Heap

A heap is a special kind of complete binary tree in which every parent node is greater than (or less than) all of its children.

Big-root heap (max-heap): a complete binary tree in which every node is larger than its children.

Small-root heap (min-heap): a complete binary tree in which every node is smaller than its children.


Heap adjustment: when the left and right subtrees of the root are already heaps, the whole tree can be turned into a heap with a single downward adjustment (sift-down).

The downward adjustment takes the value at the top of the heap and sinks it to a suitable position: at each level, compare it with the larger child and swap downward until it is no smaller than both children.

"The content written by the browser is not saved and lost, so I don't want to write it again ..."

 

Heap sorting process

1. Build a heap.

2. The element at the top of the heap is the largest element.

3. Remove the heap top, move the last element of the heap to the top, and perform one downward adjustment to restore the heap.

4. The heap top is now the second largest element.

5. Repeat step 3 until the heap is empty.

 

(Figures omitted: the heap construction process, and popping the numbers out one by one.)

Code:

 

def sift(li, left, right):
    # One downward adjustment on the subtree rooted at left; right bounds the heap.
    # Compare the root with its larger child and sink the root until it fits.
    # Which of the two children is larger doesn't matter to the result;
    # we only need the largest element to end up on top of the heap.
    i = left        # i starts as the root of the subtree
    j = 2 * i + 1   # j is the left child (the parent/child relation from above)
    tmp = li[left]  # save the root value
    while j <= right:
        if j + 1 <= right and li[j] < li[j + 1]:
            j = j + 1          # pick the bigger of the two children
        if tmp < li[j]:        # the bigger child outranks the saved root value
            li[i] = li[j]      # move the bigger child up
            i = j              # treat that child's slot as the new root
            j = 2 * i + 1      # and loop the operation above
        else:
            break              # the saved value fits here
    li[i] = tmp                # put the saved value in the slot where it belongs

def heap_sort(li):
    n = len(li)
    # Build the heap: n // 2 - 1 is the index of the last non-leaf node;
    # adjust every subtree from there back to the root at index 0.
    # Passing n - 1 as right for every call is fine: right only exists to keep
    # the adjustment from running past the current heap.
    for i in range(n // 2 - 1, -1, -1):
        sift(li, i, n - 1)
    # Pop the numbers one by one: swap the heap top (the maximum) with the last
    # element, reusing the tail of the list to save space instead of building a
    # new list, then re-adjust what remains.
    for i in range(n - 1, -1, -1):
        li[0], li[i] = li[i], li[0]
        sift(li, 0, i - 1)  # exclude the elements already placed at the end
    return li

The time complexity is also O(n log n).

As an extension: what if you want the top 10 of a list, that is, its 10 largest elements?

Use a heap: build a small-root heap from the first 10 elements, then traverse the rest of the list. If a number is smaller than the heap top, ignore it; if it is larger, replace the heap top with it and do one downward adjustment. When the traversal is done, the small-root heap holds the top 10. See the sketch below.
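A minimal sketch of this idea, using the standard library's heapq as the small-root heap (the name top10 is ours; assumes the list has at least 10 elements):

import heapq

def top10(li):
    h = li[:10]
    heapq.heapify(h)                 # small-root heap of the first 10 numbers
    for x in li[10:]:
        if x > h[0]:                 # bigger than the heap top:
            heapq.heapreplace(h, x)  # replace the top and re-adjust downward
    return sorted(h, reverse=True)

print(top10(list(range(100))))  # [99, 98, ..., 90]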

In fact, Python ships with a heapq module that handles heap operations for us.

Heapq Module

In a priority queue, each element has a priority and the element with the highest priority is served first. Priority queues are usually implemented with a heap.

Implementing heap sort with the heapq module is much simpler:

import heapq

def heapq_sort(li):
    h = []
    for value in li:
        heapq.heappush(h, value)
    return [heapq.heappop(h) for i in range(len(h))]

And if you want the top 10, a single call is enough:

heapq.nlargest(10, li)

 

That covers the three fast sorts. Quick sort is the fastest of the three, yet even it is not as fast as Python's built-in sort method.

Next, two more sorts: Shell sort and counting sort.

7. Shell sort

Shell sort is a grouped insertion sort algorithm.

Ideas:

First, take an integer d1 = n/2 and split the elements into d1 groups, where adjacent elements within a group are d1 positions apart; insertion-sort each group.

Then take d2 = d1/2 and repeat the grouped sorting, until di = 1, at which point all elements are insertion-sorted as a single group.

Shell sort does not fully order particular elements at each pass; instead, it brings the data closer and closer to ordered overall, and the final pass makes everything ordered.

Code:

def shell_sort(li):
    gap = len(li) // 2  # initially split the list into gap groups
    while gap > 0:
        for i in range(gap, len(li)):
            tmp = li[i]
            j = i - gap
            while j >= 0 and tmp < li[j]:  # insertion sort within the group
                li[j + gap] = li[j]
                j -= gap
            li[j + gap] = tmp
        gap = gap // 2  # floor division already returns an int in Python 3
    return li

As you can see from the code, Shell sort is very similar to insertion sort: insertion sort is just Shell sort with a fixed gap of 1, while Shell sort performs the insertion in groups whose adjacent elements are gap apart rather than 1 apart.

Time complexity: roughly O(n^(1+t)), where t is a number between 0 and 1 that depends on the gap sequence; with gap = len(li) // 2, t is about 0.3.

 

8. Counting sort

Problem: given a list of about one million numbers, each an integer between 0 and 100, design an algorithm that sorts the list in O(n) time.

Analysis: the list is very long but the range of values is small, so there must be a large amount of duplicate data. You can simply count how many times each of the 101 possible values appears.

Code:

def count_sort(li):
    count = [0 for i in range(101)]  # per the problem, values are integers 0..100
    for i in li:
        count[i] += 1  # count how many times each value appears
    i = 0
    # enumerate yields (index, value) pairs, so num is the number being counted
    # and m is how many times it occurs; write each num back m times
    for num, m in enumerate(count):
        for j in range(m):
            li[i] = num
            i += 1
    return li
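A quick check of the function against Python's built-in sort (an illustrative sketch; it assumes the values really are integers in 0..100):

import random

li = [random.randint(0, 100) for _ in range(1000000)]
assert count_sort(list(li)) == sorted(li)  # same result, but in O(n) time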

 
