The Classic Algorithms Section of The Way of Algorithms

    • The author of this book, Heng Hengming, has two other books, The String of Data Structures and The Philosophical Principles of Operating Systems; both are very good.
    • The book is easy to read and well written; the author adds a lot of his own thinking.
    • This article covers the classic algorithms section.
Chapter 10: Sorting and Order Statistics
  • Insert Sort
    • Grows a sorted prefix: repeatedly takes the next element from the unsorted part and inserts it into place in the sorted part
    • Sorts in place; no temporary storage needed
    • Best case O(n); average O(n^2)
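
A minimal in-place insertion sort sketch in Python (the function name `insertion_sort` is mine, not the book's):

```python
def insertion_sort(a):
    """In-place insertion sort: grow a sorted prefix one element at a time."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements of the sorted prefix one slot to the right.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```

The best case O(n) occurs when the input is already sorted, so the inner while loop never runs.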
  • Binary Insert Sort
    • Uses binary search to locate the insertion point (fewer comparisons, but shifting still costs O(n) per insert)
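
The same idea with the insertion point located by binary search; this sketch uses Python's standard `bisect` module (function name `binary_insertion_sort` is mine):

```python
import bisect

def binary_insertion_sort(a):
    """Insertion sort with binary search for the insertion point.
    Comparisons drop to O(log n) per element; shifting still costs O(n)."""
    for i in range(1, len(a)):
        key = a[i]
        # Where key belongs in the already-sorted prefix a[:i].
        pos = bisect.bisect_right(a, key, 0, i)
        # Shift a[pos:i] right by one slot and drop key into place.
        a[pos + 1:i + 1] = a[pos:i]
        a[pos] = key
    return a
```

`bisect_right` keeps equal elements in their original order, so the sort stays stable.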
  • Merge sort
    • Divide and conquer: split in the middle, sort each half recursively, then merge
    • The merge step requires extra space
    • For roughly n >= 30 it outperforms insertion sort

      Complexity is fixed at O(n log n) in all cases
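
A short merge sort sketch (names are mine); the merge step allocates a new list, which is the extra space the note refers to:

```python
def merge_sort(a):
    """Divide and conquer: split in the middle, sort halves, merge in O(n) extra space."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:        # <= keeps the sort stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])            # append whichever half has leftovers
    merged.extend(right[j:])
    return merged
```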

  • Quicksort
    • Divide and conquer; the complexity lies in the decomposition (partition) step, while the merge step is trivial
    • Sorts in place
    • Worst case is O(n^2), but as long as the partition is not always the most unbalanced one, the complexity stays O(n log n)
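
An in-place quicksort sketch using the Lomuto partition scheme (my choice of scheme; the book may use another). All the work happens in the partition; the two recursive calls need no merge:

```python
def quicksort(a, lo=0, hi=None):
    """In-place quicksort: partition around the last element, then recurse."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    pivot = a[hi]
    i = lo                          # boundary of the <= pivot region
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]       # pivot lands in its final position
    quicksort(a, lo, i - 1)
    quicksort(a, i + 1, hi)
    return a
```

Picking the last element as pivot hits the O(n^2) worst case on already-sorted input; a random pivot makes that unlikely.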
  • Any comparison-based sort corresponds to a decision tree of height at least log(n!) = Ω(n log n)
  • Count sort
    • Element values must come from a limited range
    • Space complexity is high (proportional to the value range k)
    • O(n + k)
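
A counting sort sketch for integers in a known range [0, k); the O(k) `count` array is the high space cost noted above:

```python
def counting_sort(a, k):
    """Stable counting sort for integers in [0, k): O(n + k) time, O(n + k) space."""
    count = [0] * k
    for x in a:
        count[x] += 1
    # Prefix sums: count[v] becomes the number of elements <= v,
    # i.e. one past the final position of the last copy of v.
    for v in range(1, k):
        count[v] += count[v - 1]
    out = [0] * len(a)
    for x in reversed(a):           # walking backwards keeps the sort stable
        count[x] -= 1
        out[count[x]] = x
    return out
```

Stability matters because radix sort (below) relies on a stable digit sort.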
  • Base sort
    • Sorts digit by digit from least significant to most significant, using a stable sort on each digit, such as counting sort
    • Choosing about log(n) bits per digit gives the lowest overall cost
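
A least-significant-digit radix sort sketch for non-negative integers, with a stable counting sort on each digit (base 10 here for readability; the log(n)-bits-per-digit choice would use a power-of-two base):

```python
def radix_sort(a, base=10):
    """LSD radix sort: stable counting sort on each digit, least significant first."""
    if not a:
        return a
    max_val = max(a)
    exp = 1
    while max_val // exp > 0:
        # Stable counting sort keyed on the current digit.
        count = [0] * base
        for x in a:
            count[(x // exp) % base] += 1
        for d in range(1, base):
            count[d] += count[d - 1]
        out = [0] * len(a)
        for x in reversed(a):
            d = (x // exp) % base
            count[d] -= 1
            out[count[d]] = x
        a = out
        exp *= base
    return a
```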
  • Bucket sort
    • Distributes the n elements into n buckets by value, insertion-sorts the inside of each bucket, then concatenates the buckets in order
    • Assumes the elements are uniformly distributed
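
A bucket sort sketch assuming values uniform in [0, 1); I use Python's built-in `sorted` inside each bucket in place of an explicit insertion sort:

```python
def bucket_sort(a):
    """Bucket sort for values assumed uniform in [0, 1): n buckets,
    sort inside each bucket, then concatenate the buckets in order."""
    n = len(a)
    if n == 0:
        return a
    buckets = [[] for _ in range(n)]
    for x in a:
        buckets[int(x * n)].append(x)   # value decides the bucket
    result = []
    for b in buckets:
        result.extend(sorted(b))        # each bucket is tiny on average
    return result
```

If the distribution assumption holds, each bucket gets O(1) elements on average, giving O(n) overall.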
  • Quickselect: find the k-th order statistic
    • Uses quicksort's partition, recursing into one side only
    • Worst O(n^2); average O(n)
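
A quickselect sketch (here for the k-th *smallest*; the k-th largest is symmetric). A random pivot keeps the O(n^2) worst case unlikely:

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element (k = 1 is the minimum).
    Partitions like quicksort but recurses into one side only."""
    a = list(a)
    lo, hi = 0, len(a) - 1
    k -= 1                              # 0-based target index
    while lo < hi:
        p = random.randint(lo, hi)      # random pivot avoids adversarial input
        a[p], a[hi] = a[hi], a[p]
        pivot, i = a[hi], lo
        for j in range(lo, hi):         # Lomuto partition
            if a[j] <= pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        if k < i:
            hi = i - 1                  # target is left of the pivot
        elif k > i:
            lo = i + 1                  # target is right of the pivot
        else:
            return a[i]
    return a[lo]
```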
  • Worst-case linear selection
    • Splits the elements into groups of 5 and takes the median of each; the median of those n/5 medians becomes the partition pivot
    • Why groups of 5 and not 3? (With groups of 3 the recurrence no longer solves to linear time)
    • Guarantees at least 3n/10 elements on each side of the pivot
    • Worst case O(n)
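
A median-of-medians sketch (function name `select_linear` is mine). It trades constant factors for a guaranteed O(n) worst case:

```python
def select_linear(a, k):
    """Worst-case O(n) selection via median-of-medians (groups of 5).
    Returns the k-th smallest element (k = 1 is the minimum)."""
    a = list(a)
    if not 1 <= k <= len(a):
        raise IndexError("k out of range")
    if len(a) <= 5:
        return sorted(a)[k - 1]
    # Median of each group of 5, then the median of those medians as pivot.
    medians = [sorted(a[i:i + 5])[len(a[i:i + 5]) // 2]
               for i in range(0, len(a), 5)]
    pivot = select_linear(medians, (len(medians) + 1) // 2)
    # Three-way partition around the pivot.
    lows = [x for x in a if x < pivot]
    pivots = [x for x in a if x == pivot]
    highs = [x for x in a if x > pivot]
    if k <= len(lows):
        return select_linear(lows, k)
    if k <= len(lows) + len(pivots):
        return pivot
    return select_linear(highs, k - len(lows) - len(pivots))
```

The 3n/10 guarantee means each recursive call shrinks the problem geometrically, which is what makes the recurrence linear.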
Chapter 11: Searching and Hashing
  • Sequential search
    • If the sequence is ordered so that search frequency decreases from front to back, the expected cost approaches O(1)
  • Binary search
    • For ordered sequences; O(log n)
  • Constant-time search: hashing
      • Direct addressing: very simple and collision-free, but wastes a lot of space
      • Division (modulo) hashing
        • h(k) = k mod m, where m is the hash table size
        • m must be prime, otherwise keys scatter unevenly: e.g., if m has a factor d and most elements leave the same remainder modulo d, they collide
        • m must not be a power of 2: if m = 2^p, the hash depends only on the low p bits of the key rather than all of them. Being close to a power of 2 is also bad; why?
      • Multiplication hashing
        • h(k) = ((A * k) mod 2^w) >> (w - r), where w is the machine word width, the table size is m = 2^r, and A is an odd constant
        • Power method: raise the key to the n-th power (normally n = 2) and take the middle r bits
  • Open hashing (chaining): on a collision, extend outward by appending the element to a linked list at that slot
    • Average search time is O(1 + α), where α is the load factor
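
A minimal chained hash table sketch (class and method names are mine); each slot holds a Python list standing in for the linked list:

```python
class ChainedHashTable:
    """Hashing with chaining: each slot holds a list of (key, value) pairs.
    Average search is O(1 + alpha), where alpha = n / m is the load factor."""
    def __init__(self, m=17):              # a prime table size
        self.m = m
        self.slots = [[] for _ in range(m)]

    def _bucket(self, key):
        return self.slots[hash(key) % self.m]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)   # overwrite an existing key
                return
        bucket.append((key, value))        # collision: chain onto the slot

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)
```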
  • Closed hashing (open addressing): on a collision, find another position for the element inside the table itself
    • The process of finding that alternative position is called probing
    • Linear probing
      • h(k, i) = (h'(k) + i) % m, where h'(k) is the home position
      • Scans for an unoccupied position in one direction
      • Prone to primary clustering
    • Nonlinear exploration
      • Quadratic probing: h(k, i) = (h'(k) + c1 * i + c2 * i^2) % m; prone to secondary clustering
    • Double hashing
      • Builds the probe sequence from two hash functions h1, h2
      • h(k, i) = (h1(k) + i * h2(k)) % m
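
A small double-hashing insertion sketch for integer keys (the particular h1 and h2 are illustrative choices, not from the book):

```python
def double_hash_insert(table, key):
    """Open addressing with double hashing: probe h(k, i) = (h1(k) + i*h2(k)) % m.
    h2 must never be 0 and must be relatively prime to m so every slot is
    reachable; a prime m guarantees this."""
    m = len(table)
    h1 = key % m
    h2 = 1 + (key % (m - 1))        # step size in [1, m-1]
    for i in range(m):
        slot = (h1 + i * h2) % m
        if table[slot] is None:     # None marks an empty slot
            table[slot] = key
            return slot
    raise RuntimeError("hash table is full")
```

Because the step h2(k) varies per key, two keys with the same home slot follow different probe sequences, avoiding the clustering of linear and quadratic probing.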
    • Pseudo-random probing
      • Uses a pseudo-random sequence of step sizes
      • Still suffers from secondary clustering
    • Expected number of probes in an unsuccessful search: 1/(1 - α)
    • Expected number of probes in a successful search: at most (1/α) ln(1/(1 - α))
    • With closed hashing, elements cannot simply be deleted; marking slots as "deleted" solves this. If insertions are rare compared to searches, re-hashing the table can reclaim the deleted slots
  • Randomized hashing
    • Picks a hash function at random from a family of hash functions
    • Avoids the severe clustering that any single fixed hash function exhibits in extreme cases
    • Universal hashing
      • A family H of hash functions such that any two distinct elements are mapped to the same slot by at most |H|/m of the functions
  • Perfect hashing
    • For n elements, builds a hash table of size m = O(n) with worst-case O(1) search
    • Uses two levels: the first level has size n, and the second-level table for slot i has size equal to the square of the number of elements falling into slot i
    • Total space consumption is O(n)
Chapter 12: Shortest Paths
  • If the graph contains a reachable negative-weight cycle, shortest paths are undefined
  • Single-source shortest paths
    • Dijkstra algorithm
      • Greedy algorithm; requires non-negative edge weights
      • Optimal substructure: every subpath of a shortest path is itself a shortest path
      • Greedy-choice property: the next node added to the path is the unvisited node closest to the source
      • Each step selects the closest unvisited node and updates the distances of all its neighbors
      • Time complexity is O(V^2); implemented with a heap it reaches O(E log V)

        Same structure as Prim's algorithm
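
A heap-based Dijkstra sketch using the standard `heapq` module; the graph encoding (dict of adjacency lists) is my choice:

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra with a binary heap: O(E log V).  graph maps each node to a
    list of (neighbor, weight) pairs; weights must be non-negative."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                    # stale entry; u was settled closer already
        for v, w in graph.get(u, []):   # relax every edge out of u
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

Replacing "distance from the source" with "weight of the connecting edge" in the relaxation step turns this into Prim's algorithm.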

    • Bellman-Ford algorithm
      • Can handle negative edge weights
      • Performs V-1 rounds, relaxing every edge of the graph in each round
      • Complexity O(VE)
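
A Bellman-Ford sketch over an edge list, with the usual extra pass to detect a reachable negative cycle:

```python
def bellman_ford(edges, num_nodes, source):
    """Bellman-Ford: V-1 rounds, each relaxing every edge: O(VE).
    edges is a list of (u, v, w) triples; nodes are 0..num_nodes-1.
    Returns the distance list, or None if a negative cycle is reachable."""
    INF = float('inf')
    dist = [INF] * num_nodes
    dist[source] = 0
    for _ in range(num_nodes - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass: any further improvement means a negative cycle.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            return None
    return dist
```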
    • BFS
      • For the case where all edge weights are equal
      • O(V + E)
  • Multi-source multi-point Shortest path
    • Floyd-Warshall algorithm
      • Dynamic programming algorithm
      • c_ij(k) = min{ c_ij(k-1), c_ik(k-1) + c_kj(k-1) }, where k is the largest intermediate node allowed
      • Complexity O(n^3)
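
The recurrence translates almost directly into three nested loops; `w` is an n x n weight matrix with `float('inf')` where no edge exists:

```python
def floyd_warshall(w):
    """Floyd-Warshall all-pairs shortest paths: O(n^3) time, O(n^2) space."""
    n = len(w)
    dist = [row[:] for row in w]        # dist[i][j] with intermediates 0..k-1
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # c_ij(k) = min(c_ij(k-1), c_ik(k-1) + c_kj(k-1))
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```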
    • Johnson algorithm
      • Equivalently transforms the graph into one with no negative weights, then uses the Dijkstra algorithm
      • Add a node s with zero-weight edges to every node, run Bellman-Ford from s, and use each node's resulting distance h(v) to reweight the edges: w'(u, v) = w(u, v) + h(u) - h(v)
      • Run the Dijkstra algorithm from each node
      • The cost is dominated by the Dijkstra runs: O(VE + V^2 log V)
      • If Bellman-Ford reports a negative cycle, this method cannot be used
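
A sketch of the whole pipeline (function name `johnson` and the graph encoding are mine). Instead of materializing the virtual source s, it starts every potential h(v) at 0, which has the same effect as zero-weight edges from s:

```python
import heapq

def johnson(graph, nodes):
    """Johnson's algorithm: Bellman-Ford potentials, reweight, Dijkstra per node.
    graph maps node -> [(neighbor, weight)].  Returns {source: {node: dist}},
    or None if a negative cycle exists."""
    INF = float('inf')
    edges = [(u, v, w) for u in nodes for v, w in graph.get(u, [])]

    # Bellman-Ford from the virtual source: V rounds for the augmented graph.
    h = {v: 0 for v in nodes}
    for _ in range(len(nodes)):
        for u, v, w in edges:
            if h[u] + w < h[v]:
                h[v] = h[u] + w
    for u, v, w in edges:
        if h[u] + w < h[v]:
            return None                 # still relaxing: negative cycle

    # Dijkstra from every node on the reweighted graph:
    # w'(u, v) = w(u, v) + h(u) - h(v) >= 0.
    result = {}
    for src in nodes:
        dist = {src: 0}
        heap = [(0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, INF):
                continue                # stale heap entry
            for v, w in graph.get(u, []):
                nd = d + w + h[u] - h[v]
                if nd < dist.get(v, INF):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        # Undo the reweighting to recover true distances.
        result[src] = {v: d - h[src] + h[v] for v, d in dist.items()}
    return result
```

Reweighting preserves shortest paths because every path from u to v changes by the same amount, h(u) - h(v).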

If reprinting, please credit FOCUSTC (blog: http://blog.csdn.net/caozhk).
