Common Algorithms and Design Patterns

Source: Internet
Author: User

The same problem can be solved by different algorithms, and the quality of an algorithm affects its efficiency and even that of the whole program. The purpose of algorithm analysis is to select a suitable algorithm and to improve it.

In computer science, the time complexity of an algorithm is a function that quantitatively describes the time the algorithm takes to run, expressed as a function of the length of the input. Time complexity is commonly written in big O notation, which excludes lower-order terms and leading coefficients. Expressed this way, time complexity is called asymptotic: it examines the behavior as the input size approaches infinity.


Complexity of the algorithm

The complexity of an algorithm is divided into time complexity and space complexity. Time complexity refers to the computational work required to execute the algorithm, while space complexity refers to the memory required to execute it. (The complexity of an algorithm reflects the resources it consumes when running; the most important computer resources are time and space, so complexity is divided into time complexity and space complexity.)

Complexity of Time

1. In general, the number of times the basic operation of an algorithm is repeated is a function f(n) of the problem size n, so the time complexity of the algorithm is written: T(n) = O(f(n))

Analysis: As the problem size n increases, the growth rate of the algorithm's execution time is proportional to the growth rate of f(n). The more slowly f(n) grows, the lower the complexity of the algorithm and the higher its efficiency.

2. When calculating the time complexity, first identify the basic operation of the algorithm, then determine from the corresponding statements how many times it executes, and then find the order of magnitude of T(n). The common orders of magnitude, from smallest to largest, are: 1, log2(n), n, n*log2(n), n^2, n^3, 2^n, n!. Take f(n) to be that order of magnitude; if the limit of T(n)/f(n) is a constant c, then the time complexity is T(n) = O(f(n)).

Example: consider an algorithm whose basic operation executes n^3 + n^2 times (the code of the original example was not reproduced in this copy).

Then T(n) = n^3 + n^2. Comparing against the orders of magnitude listed above, we can determine that n^3 is the order of magnitude of T(n).

So f(n) = n^3, and the limit of T(n)/f(n) yields the constant c = 1.

The time complexity of the algorithm is therefore T(n) = O(n^3). Note: n^3 means n to the 3rd power.
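The original code for this example was not reproduced; as a hedged illustration, a hypothetical algorithm whose basic operation executes n^3 + n^2 times might look like this (the function name `example` is my own):

```python
def example(n):
    """Hypothetical algorithm whose basic operation runs n**3 + n**2 times."""
    count = 0
    # Triple nested loop: the basic operation executes n**3 times.
    for i in range(n):
        for j in range(n):
            for k in range(n):
                count += 1
    # Double nested loop: the basic operation executes n**2 more times.
    for i in range(n):
        for j in range(n):
            count += 1
    return count  # T(n) = n**3 + n**2
```

For n = 10 this returns 1100 = 10^3 + 10^2; as n grows, T(n)/n^3 tends to the constant 1, so T(n) = O(n^3).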

3. An easy way to estimate complexity (for example in Pascal programs) is to count the loops: a single for loop is O(n), a doubly nested loop is O(n^2), and so on. A step that halves the remaining range is O(log n); examples of such halving are fast exponentiation and binary search. If a for loop contains a halving (binary) step inside it, the time complexity is O(n*log n).
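For instance, binary search halves the remaining range on each pass, which is where the O(log n) estimate comes from. A minimal sketch (the function name and details below are my own, not from the original):

```python
def binary_search(array, target):
    """Return the index of target in a sorted array, or -1 if absent; O(log n)."""
    lo, hi = 0, len(array) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # halve the remaining range each pass
        if array[mid] == target:
            return mid
        elif array[mid] < target:
            lo = mid + 1          # target can only be in the right half
        else:
            hi = mid - 1          # target can only be in the left half
    return -1
```

Wrapping this search in a for loop over n elements would give the O(n*log n) case mentioned above.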

Common sorts

Bubble sort (Bubble Sort): O(n^2). Treat the elements to be sorted as vertically arranged "bubbles": smaller elements are lighter and float upward.

Insertion sort (Insertion Sort): O(n^2). Take one element at a time from the unsorted elements and scan the already sorted sequence to insert it at the appropriate position. Note: initially, the sorted sequence of elements is empty.

Selection sort (Selection Sort): O(n^2). First find the smallest element in the unsorted sequence and place it at the start of the sorted sequence; then repeatedly find the smallest of the remaining unsorted elements and append it to the end of the sorted sequence.

Quick sort (Quick Sort): O(n*log2(n)). First select a pivot value, then put elements smaller than it on the left and larger ones on the right (implemented by searching from both ends and swapping each mismatched pair found). The same process is then applied to both sides separately (recursion).

Heap sort (Heapsort): O(n*log2(n)). A sorting algorithm built on the heap data structure. A heap is an approximately complete binary tree that satisfies the heap property: each child node's key is always less than (or always greater than) that of its parent node. Note: approximately complete binary tree.

Shell sort (Shell Sort): O(n^(1+ε)), 0 < ε < 1. Select a step size (gap) and sort the elements that lie that step apart; repeat with progressively smaller steps until the step size is 1.

Bin sort (Bin Sort): O(n). Set up a number of boxes; each record with key equal to K is put into box K (distribution), then the non-empty boxes are connected together in order of box number (collection). Note: one of the distribution sorts; ordering is achieved through distribute and collect passes.

Bubble sort

The basic idea of bubble sort is to compare adjacent pairs of numbers in turn, placing the smaller number first and the larger number after it. On the first pass: first compare the 1st and 2nd numbers, putting the smaller before the larger; then compare the 2nd and 3rd numbers, again putting the smaller before the larger; continue this way until the last two numbers have been compared. The first pass ends with the largest number moved to the last position.

On the second pass, the comparisons again start from the first pair (the 1st number may no longer be smaller than the 2nd because of earlier swaps) and proceed up to the second-to-last number (the last position already holds the largest). When the second pass ends, a new maximum sits in the second-to-last position (actually the second-largest number of the whole sequence). This process repeats until the sequence is fully sorted.

Because the smaller numbers keep moving forward during sorting while the larger numbers move backward, which resembles bubbles rising, the method is called bubble sort.

The implementation uses a double loop, with outer loop variable i and inner loop variable j. If there are 10 numbers to sort, the outer loop repeats 9 times, and the inner loop repeats 9, 8, ..., 1 times in turn. Each comparison involves two adjacent elements, identified by a[j] and a[j+1]; i takes the values 1, ..., 9 in turn, and for each value of i, j takes the values 1, ..., 10-i.

```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# version: python3.5.0
import random, time

def bubble_sort(array):
    '''Bubble from small to large; each outer pass puts the largest remaining value last.'''
    for i in range(len(array)):              # outer loop: one pass per element
        for j in range(len(array) - i - 1):  # inner loop: compare adjacent pairs
            if array[j] > array[j + 1]:
                array[j], array[j + 1] = array[j + 1], array[j]

if __name__ == '__main__':
    array = [random.randint(1, 1000000) for i in range(50000)]
    start_time = time.time()
    bubble_sort(array)  # the original called bubble_small_to_big, which was never defined
    end_time = time.time()
    total_time = end_time - start_time
    print('Start time: %s, end time: %s, total time spent: %s seconds' %
          (start_time, end_time, total_time))
    '''
    Start time: 1464748778.018224,
    End time: 1464749177.4116025,
    Total time taken: 399.39337849617 seconds
    '''
```
Insertion sort

Insertion sort addresses the following situation: given an already sorted data sequence, insert a new number so that the sequence is still sorted after the insertion. The basic operation of insertion sort is to insert one value into already sorted data, obtaining a new sorted sequence with one more element. The algorithm is suitable for sorting small amounts of data; its time complexity is O(n^2), and it is a stable sorting method. The algorithm divides the array to be sorted into two parts: the first part contains all elements except the element to be inserted (leaving room in the array for the insertion), and the second part contains only that element. After the first part is sorted, the element is inserted into its place in the sorted sequence.

The basic idea of insertion sort is that each step inserts one record to be sorted, according to the size of its key, into the appropriate position within the previously sorted records, until all records have been inserted.

```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# version: python3.5.0
import time, random

def insertion_sort(array):
    for i in range(1, len(array)):   # the first element counts as already sorted
        position = i                 # record the current index position
        p_value = array[i]           # record the value at the current position
        while position > 0 and p_value < array[position - 1]:  # compare with the previous value
            array[position] = array[position - 1]  # shift the previous value back one slot
            position -= 1                          # move forward one position
        array[position] = p_value    # put p_value in the right place

if __name__ == '__main__':
    array = [random.randint(1, 1000000) for i in range(50000)]
    start_time = time.time()
    insertion_sort(array)
    end_time = time.time()
    total_time = end_time - start_time
    print(array[:100])
    print('Start time: %s, end time: %s, total time spent: %s seconds' %
          (start_time, end_time, total_time))
    '''
    Start time: 1464753187.1083782,
    End time: 1464753348.6856112,
    Total time spent: 161.577233076096 seconds
    '''
```
Selection sort

Selection sort (Selection Sort) is a simple and intuitive sorting algorithm. It works by selecting the smallest (or largest) of the remaining unsorted elements on each pass and placing it at the end of the sorted sequence, until all elements to be sorted are exhausted. Selection sort is an unstable sorting method (for example, in the sequence [5, 5, 3], the first pass exchanges the first 5 with the 3, moving the first 5 behind the second 5).

```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# version: python3.5.0
import random, time

def selection_sort(array):
    '''Selection sort from small to large; each pass moves the smallest remaining value to the front.'''
    for i in range(len(array)):              # outer loop: one pass per slot
        smallest_index = i                   # assume the current slot holds the minimum
        for j in range(i + 1, len(array)):   # scan the unsorted remainder
            # Compare against the current minimum (the original compared
            # array[i] here, which is a bug and sorts incorrectly).
            if array[smallest_index] > array[j]:
                smallest_index = j           # record the index of the new minimum
        array[i], array[smallest_index] = array[smallest_index], array[i]  # swap into place

if __name__ == '__main__':
    # Generate a list of 50,000 random numbers
    array = [random.randrange(1000000) for i in range(50000)]
    start_time = time.time()
    selection_sort(array)
    end_time = time.time()
    total_time = end_time - start_time
    print('Start time: %s, end time: %s, total time spent: %s seconds' %
          (start_time, end_time, total_time))
    '''
    Start time: 1464512429.3297048,
    End time: 1464512563.4953785,
    Total time spent: 134.165673732758 seconds
    '''
```
Quick Sort

Suppose the array to be sorted is a[0] ... a[n-1]. First, arbitrarily select one element (usually the first element of the array) as the pivot, then move every number smaller than it in front of it and every number larger than it behind it; this process is called one pass of quick sort. Note that quick sort is not a stable sorting algorithm: the relative positions of equal values may change by the time the algorithm ends.

Note: if a file to be sorted contains several records with the same key, a sorting method is stable if the relative order of those records is preserved, and unstable if their relative order changes.
The stability of a sorting algorithm is defined over all input instances: if even one possible input instance makes the algorithm violate the stability requirement, the sorting algorithm is unstable.

The following demonstration assumes the user has entered this array:

Subscript: 0 1 2 3 4 5
Data:      6 2 7 3 8 9

Create variables i = 0 (pointing to the first element), j = 5 (pointing to the last element), and k = 6 (the value of the first element, the pivot). We want to move all numbers smaller than k to the left of k, so we start by looking for a number smaller than 6, searching from j right to left and repeatedly decrementing j. The first value smaller than 6 is the 3 at subscript 3, so the 3 is moved to subscript 0 and the 6 at subscript 0 moves to subscript 3. The first comparison is complete:

Subscript: 0 1 2 3 4 5
Data:      3 2 7 6 8 9

Now i = 0, j = 3, k = 6. The second comparison looks for a number larger than k, searching forward from i. Incrementing i, we find that the 7 at subscript 2 is the first value larger than k, so the 7 at subscript 2 is exchanged with the 6 at subscript 3 that j points to, giving:

Subscript: 0 1 2 3 4 5
Data:      3 2 6 7 8 9

Now i = 2, j = 3, k = 6. The two comparisons above are called one cycle. Next, j is decremented again and the cycle repeats. In this example, after one more cycle i and j "meet": both point to subscript 2. At that point the first pass of comparisons is over. The result is that every number to the left of k (= 6) is smaller than it, and every number to the right of k is larger:

Subscript: 0 1 2 3 4 5
Data:      3 2 6 7 8 9

If i and j had not met, we would increment i to find a large value, then decrement j to find a small one, repeating in a continuous cycle; note that the testing and the searching proceed together. The same process is then applied separately to the data on both sides of k, until no further partitioning is possible. Note: the first pass of quick sort does not directly produce the final result; it only partitions the numbers smaller and larger than k onto the two sides of k. To obtain the final result, this step must be applied again to the subarrays on both sides, partitioning recursively until a subarray can no longer be divided (it holds only one element).
```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# version: python3.5.0
import time, random

def quick_sort(array, left, right):
    if left >= right:        # zero or one element: already sorted
        return
    base = array[left]       # use the first element of the range as the pivot
    i = left
    j = right
    while i < j:
        # Search from the right for a number smaller than base; the i < j guard
        # prevents j from running past i (e.g. when base is the smallest value).
        while i < j and array[j] >= base:
            j -= 1
        if i < j:
            array[i] = array[j]   # move the found value into the hole on the left
        # Search from the left for a number larger than base; the i < j guard
        # prevents i from running past j (e.g. when base is the largest value).
        while i < j and array[i] < base:
            i += 1
        if i < j:
            array[j] = array[i]   # move the found value into the hole on the right
    array[i] = base          # i == j here: drop the pivot into its final slot
    # Everything left of index i is now smaller than array[i] and everything to
    # its right is larger, so recurse on the two halves.
    quick_sort(array, left, i - 1)
    quick_sort(array, i + 1, right)

if __name__ == '__main__':
    array = [random.randrange(1000000) for i in range(50000)]  # 50,000 random values
    start_time = time.time()
    quick_sort(array, 0, len(array) - 1)
    end_time = time.time()
    print('First 30 sorted values: %s' % array[:30])
    print('Total elapsed time: %s seconds' % (end_time - start_time))
```
Binary tree traversal: characteristics and definition of trees

A tree is an important nonlinear data structure. Visually, it organizes data elements (called nodes in a tree) by branching relationships, much like a tree in nature. Tree structures are widespread in the real world: human genealogies and the charts of social organizations can both be depicted as trees. Trees are also widely used in computing; for example, a compiler can use a tree to represent the syntactic structure of a source program, and in database systems the tree is one of the important ways of organizing information. Any problem with hierarchical relationships can be described by a tree.

A tree is a collection of elements. Let's introduce the tree in a more intuitive way. The data structure shown below (the figure was not reproduced) is a tree:

A tree uses multiple nodes (node) to store elements. Certain pairs of nodes are related, the relationship being indicated by a connecting line called an edge (edge). The node at the upper end of an edge is called the parent node, and the node at the lower end is called the child node. The tree looks like a root that keeps branching downward.

Each node can have multiple child nodes (children), and that node is the parent (parent) of the corresponding child nodes. For example, 3 and 5 are child nodes of 6, and 6 is the parent of 3 and 5; 1, 8, and 7 are children of 3, and 3 is the parent of 1, 8, and 7. The tree has one node with no parent node, called the root node (root); here it is 6. A node with no child nodes is called a leaf node (leaf), such as the nodes 1, 8, 9, and 5. We can also see that the tree in the figure has 4 levels in total, with 6 on the first level and 9 on the fourth. The maximum level of any node in a tree is called its depth; in other words, the depth (depth) of this tree is 4.

If we start looking downward from node 3 and ignore the rest, what we see is a tree with node 3 as its root node:

A triangle represents a tree

Furthermore, if we also define an isolated node to be a tree, the original tree can be represented as a relationship between the root node and its subtrees (subtree):

These observations actually give us a rigorous way to define trees:

1. A tree is a collection of elements.

2. The collection can be empty. A tree with no elements is called an empty tree.

3. If the collection is not empty, it has one root node and 0 or more subtrees, and the root node is connected to the root node of each of its subtrees by one edge (edge).

Item 3 above defines the tree recursively, that is, the tree itself (as a subtree) is used while defining a tree. Because of this recursive nature, many tree-related operations can also be implemented conveniently with recursion, as we will see later.

Implementation of a tree

The figure above already suggests one memory implementation of a tree: each node stores its element and multiple pointers to child nodes. However, the number of child nodes is indeterminate: one parent node may have many child nodes while another has only one, and insertions and deletions change the number of children further. This uncertainty leads to many memory-related operations and easily wastes memory.

A classic implementation is as follows:

Memory implementation of the tree

Any two nodes that share the same parent node are sibling nodes (sibling). In this implementation, each node contains one pointer to its first child node and another pointer to its next sibling node. In this way, every node can be represented by a uniform, fixed structure.
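A minimal sketch of this first-child/next-sibling representation (the class and method names below are my own, not from the original):

```python
class SiblingTreeNode(object):
    """Tree node in first-child / next-sibling form: one fixed shape for any child count."""
    def __init__(self, data):
        self.data = data
        self.first_child = None    # pointer to the first child node
        self.next_sibling = None   # pointer to the next sibling node

    def add_child(self, child):
        """Append a child by walking to the end of the sibling chain."""
        if self.first_child is None:
            self.first_child = child
        else:
            node = self.first_child
            while node.next_sibling is not None:
                node = node.next_sibling
            node.next_sibling = child

    def children(self):
        """Iterate over all children via the sibling chain."""
        node = self.first_child
        while node is not None:
            yield node
            node = node.next_sibling
```

A node with three children stores only two pointers itself; the second and third children hang off the first child's sibling chain.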

The computer's file system is the structure of the tree, as described in the background knowledge of Linux file management. In UNIX file systems, each file (the folder is also a file) can be viewed as a node. Non-folder files are stored on leaf nodes. The folder has pointers to the parent and child nodes (in Unix, the folder also contains a pointer to itself, which differs from the tree we see above). In Git, there is a similar tree structure to express the entire file system version changes (refer to version Management Kingdoms).

Binary tree

A binary tree is a finite set of n (n ≥ 0) nodes; it is an ordered tree in which each node has at most two subtrees. It is either the empty set, or consists of a root and two disjoint binary trees called the left subtree and the right subtree.

Characteristics:

(1) A binary tree is an ordered tree: even if a node has only one subtree, it must still be distinguished as the left or the right subtree;

(2) The degree of each node of a binary tree cannot exceed 2; it can only be 0, 1, or 2;

(3) There are 5 kinds of nodes in a binary tree: the empty node, a node with neither a left nor a right subtree, a node with only a left subtree, a node with only a right subtree, and a node with both subtrees.

A binary tree (binary tree) is a special kind of tree: each node of a binary tree can have at most 2 child nodes:

Binary tree

Because the number of child nodes of a binary tree is fixed, it can be implemented directly in memory. Each node has a left child node and a right child node. The left child node is the root node of the left subtree, and the right child node is the root node of the right subtree.

If we add one extra condition to a binary tree, we get a special binary tree called the binary search tree. A binary search tree requires: each node is not smaller than any element of its left subtree, and not larger than any element of its right subtree.

(If we assume there are no duplicate elements in the tree, the requirement above can be written as: each node is larger than any node in its left subtree and smaller than any node in its right subtree.)

A binary search tree; note the ordering of the element values in the tree

A binary search tree makes the search algorithm easy to implement. When searching for an element x, we compare x with the root node:

1. If x equals the root node, x has been found; stop the search (termination condition)

2. If x is smaller than the root node, search the left subtree

3. If x is greater than the root node, search the right subtree

A search in a binary search tree requires at most as many comparisons as the depth of the tree. The depth of a binary search tree with n nodes is at most n and at least on the order of log(n).
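The three comparison steps above can be sketched directly in Python (the node class and function names below are my own, not from the original):

```python
class BSTNode(object):
    """Node of a binary search tree."""
    def __init__(self, data, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right

def bst_search(node, x):
    """Search for x starting at node; takes at most `depth of tree` comparisons."""
    while node is not None:
        if x == node.data:
            return node   # step 1: equal to the root of this subtree, found
        # step 2: smaller, descend left; step 3: greater, descend right
        node = node.left if x < node.data else node.right
    return None           # fell off the tree: x is not present
```

On the tree BSTNode(5, BSTNode(3), BSTNode(8)), searching for 3 returns its node, while searching for 7 returns None.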

Traversal of a binary tree

Traversal visits all the nodes of a tree, accessing each node exactly once. According to the position of the root node in the visiting order, traversals are divided into pre-order traversal, in-order traversal, and post-order traversal.

Pre-order traversal: root node, left subtree, right subtree

In-order traversal: left subtree, root node, right subtree

Post-order traversal: left subtree, right subtree, root node

For example, for the tree shown below (the figure was not reproduced), the three traversals are:

Pre-order traversal: ABDEFGC

In-order traversal: DEBGFAC

Post-order traversal: EDGFBCA

Binary tree types:

(1) Complete binary tree: if the height of the binary tree is h, then every level except the h-th (levels 1 to h-1) contains the maximum possible number of nodes, the h-th level contains the leaf nodes, and those leaves are arranged consecutively from left to right.

(2) Full binary tree: every node except the leaf nodes has both a left and a right child, and all leaf nodes are on the bottom level of the binary tree.

(3) Balanced binary tree: also known as an AVL tree (not to be confused with the AVL algorithm), it is a binary sort tree with the following property: it is either an empty tree, or the absolute difference between the heights of its left and right subtrees is at most 1 and both of its subtrees are themselves balanced binary trees.

How can we tell whether a tree is a complete binary tree? By definition:

The textbook says: a binary tree of depth k with 2^k - 1 nodes is a full binary tree. This concept is easy to understand:

it is a tree of depth k with no vacancies.

First, number the nodes of the full binary tree in breadth-first traversal order (left to right within each level).

Then take the tree of depth k with n nodes and number its nodes the same way; if every number it receives corresponds to the matching position in the full binary tree, the tree is a complete binary tree.
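The numbering test above amounts to a breadth-first scan: once the first missing child (a numbering gap) appears, no later node may exist. A minimal sketch under that reading (the class and function names are assumptions, not from the original):

```python
from collections import deque

class Node(object):
    def __init__(self, data, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right

def is_complete(root):
    """Breadth-first scan: after the first gap, no further non-empty node may appear."""
    if root is None:
        return True
    queue = deque([root])
    seen_gap = False
    while queue:
        node = queue.popleft()
        if node is None:
            seen_gap = True       # a numbering gap has appeared
        else:
            if seen_gap:
                return False      # a node after the gap: numbering does not match
            queue.append(node.left)
            queue.append(node.right)
    return True
```

For example, a root with a left child that itself has a left child, plus a right child, passes the check; giving the right child a child while the left child has none fails it.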

How do we judge whether a tree is a balanced binary tree?

In the left figure (not reproduced), the left subtree of the root has height 3 while the right subtree has height 1; the difference exceeds 1, so the tree is not balanced.

In the right figure, the left subtree of node 2 has height 0 while its right subtree has height 2; the difference again exceeds 1, so that tree is not balanced either.
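The height-difference rule can be checked with straightforward recursion. A simple sketch (an O(n^2) version chosen for clarity; the class and function names are my own):

```python
class Node(object):
    def __init__(self, data, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right

def height(node):
    """Height of a subtree; an empty tree has height 0."""
    if node is None:
        return 0
    return 1 + max(height(node.left), height(node.right))

def is_balanced(node):
    """Balanced iff every node's left/right subtree heights differ by at most 1."""
    if node is None:
        return True               # an empty tree is balanced by definition
    if abs(height(node.left) - height(node.right)) > 1:
        return False
    return is_balanced(node.left) and is_balanced(node.right)
```

A faster variant computes the height and the balance check in a single bottom-up pass, giving O(n).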

```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# version: python3.5.0

class TreeNode(object):
    def __init__(self, data=None, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right

class BTree(object):
    def __init__(self, root=None):
        self.root = root

    def preorder(self, treenode):
        '''Pre-order traversal: root node -> left subtree -> right subtree'''
        if treenode is None:
            return
        print(treenode.data, end=' ')
        self.preorder(treenode.left)
        self.preorder(treenode.right)

    def inorder(self, treenode):
        '''In-order traversal: left subtree -> root node -> right subtree'''
        if treenode is None:
            return
        self.inorder(treenode.left)
        print(treenode.data, end=' ')
        self.inorder(treenode.right)

    def postorder(self, treenode):
        '''Post-order traversal: left subtree -> right subtree -> root node'''
        if treenode is None:
            return
        self.postorder(treenode.left)
        self.postorder(treenode.right)
        print(treenode.data, end=' ')

if __name__ == '__main__':
    n1 = TreeNode(data='A')
    n2 = TreeNode('B', n1, None)
    n3 = TreeNode('C')
    n4 = TreeNode('D')
    n5 = TreeNode('E', n3, n4)
    n6 = TreeNode('F', n2, n5)
    n7 = TreeNode('G', n6)
    n8 = TreeNode('H')
    root = TreeNode('I', n7, n8)
    bt = BTree(root)
    print('Pre-order traversal result:')
    bt.preorder(bt.root)
    print('\nIn-order traversal result:')
    bt.inorder(bt.root)
    print('\nPost-order traversal result:')
    bt.postorder(bt.root)
    '''
    Pre-order traversal result:  I G F B A E C D H
    In-order traversal result:   A B F C E D G I H
    Post-order traversal result: A B C D E F G H I
    '''
```
Heap sort, Shell sort, and bin sort
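Heap sort, Shell sort, and bin sort appear in the table of common sorts above but were given no code in the original. As one hedged illustration, heap sort can be sketched by building a max-heap in place and repeatedly moving its root to the end (the function names below are my own):

```python
def heap_sort(array):
    """In-place heap sort: build a max-heap, then repeatedly extract the maximum."""
    def sift_down(start, end):
        # Move array[start] down until the heap property holds on array[start..end].
        root = start
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and array[child] < array[child + 1]:
                child += 1                      # pick the larger of the two children
            if array[root] < array[child]:
                array[root], array[child] = array[child], array[root]
                root = child                    # continue sifting from the child
            else:
                return

    n = len(array)
    for start in range((n - 2) // 2, -1, -1):   # heapify from the last internal node up
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):             # move the current maximum to the end
        array[0], array[end] = array[end], array[0]
        sift_down(0, end - 1)                   # restore the heap on the shrunken prefix
    return array
```

Both phases cost O(n*log2(n)) overall, matching the complexity listed in the table.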
