Data structures and algorithms

Source: Internet
Author: User

Data Structure

Linked list

A linked list is a linear data structure made up of nodes, each of which points to the next node through a pointer. It can be used to represent sequences. Singly linked list: each node points only to the next node, and the last node points to null (NULL). Doubly linked list: each node has two pointers, p and n: p points to the previous node, n points to the next node, and the last node's n points to null. Circular linked list: each node points to the next node, and the last node points back to the first node. Time complexity: Index: O(n), Search: O(n), Insert: O(1), Delete: O(1)
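A singly linked list can be sketched in a few lines of Python; this is a minimal illustration (the class and method names are my own, not code from the repo):

```python
class Node:
    """A node in a singly linked list: a value plus a pointer to the next node."""
    def __init__(self, value):
        self.value = value
        self.next = None

class SinglyLinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, value):
        # O(1) insertion at the head: the new node points at the old head.
        node = Node(value)
        node.next = self.head
        self.head = node

    def find(self, value):
        # O(n) search: walk the chain until the value is found or we hit None.
        current = self.head
        while current is not None:
            if current.value == value:
                return current
            current = current.next
        return None

    def to_list(self):
        result, current = [], self.head
        while current is not None:
            result.append(current.value)
            current = current.next
        return result
```

Pushing 3, 2, 1 in that order yields the sequence 1 → 2 → 3, which shows why insertion at the head is O(1) while search remains O(n).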

Stack

A stack is a collection of elements that supports two basic operations: push adds an element to the top of the stack, and pop removes the element at the top. It is a last-in, first-out (LIFO) data structure. Time complexity: Index: O(n), Search: O(n), Insert: O(1), Delete: O(1)

Queue

A queue is a collection of elements that supports two basic operations: enqueue adds an element to the back of the queue, and dequeue removes the element at the front. It is a first-in, first-out (FIFO) data structure. Time complexity: Index: O(n), Search: O(n), Insert: O(1), Delete: O(1)
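In Python, both structures are readily available without writing a class: a plain list serves as a stack, and `collections.deque` serves as a queue. A quick sketch of the push/pop and enqueue/dequeue operations described above:

```python
from collections import deque

# Stack (LIFO): Python lists give O(1) amortized push/pop at the tail.
stack = []
stack.append(1)    # push
stack.append(2)    # push
top = stack.pop()  # pop returns the most recently pushed element (2)

# Queue (FIFO): deque gives O(1) appends and pops at both ends.
queue = deque()
queue.append('a')        # enqueue at the back
queue.append('b')
front = queue.popleft()  # dequeue from the front ('a')
```

Note that popping from the front of a plain list would be O(n), which is why `deque` is the idiomatic queue here.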

Tree

A tree is an undirected, connected, acyclic graph.

Binary tree

A binary tree is a tree data structure in which each node has at most two children, called the left child and the right child. Full binary tree: every node has either 0 or 2 children. Perfect binary tree: every internal node has two children and all leaf nodes are at the same depth. Complete binary tree: every level except possibly the last is completely filled, and the nodes in the last level are packed as far left as possible.

Binary search tree

A binary search tree (BST) is a binary tree in which every node's value is greater than or equal to every value in its left subtree and less than or equal to every value in its right subtree. Time complexity: Index: O(log n), Search: O(log n), Insert: O(log n), Delete: O(log n)
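The BST ordering property above is what makes search and insert logarithmic on a balanced tree (and what makes an in-order traversal produce sorted output). A minimal sketch, with names of my own choosing:

```python
class BSTNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def bst_insert(root, key):
    # Average O(log n); degrades to O(n) on a badly unbalanced tree.
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = bst_insert(root.left, key)
    else:
        root.right = bst_insert(root.right, key)
    return root

def bst_search(root, key):
    # Walk down, going left for smaller keys and right for larger ones.
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root

def inorder(root):
    # In-order traversal of a BST yields the keys in sorted order.
    if root is None:
        return []
    return inorder(root.left) + [root.key] + inorder(root.right)
```

Inserting 5, 3, 8, 1, 4 and then traversing in order produces 1, 3, 4, 5, 8.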

Trie

A trie, also known as a radix tree or prefix tree, is a search tree that stores a dynamic set or associative array whose keys are usually strings. No node stores its key directly; instead, a node's position in the tree determines the key associated with it. All descendants of a node share a common prefix, and the root is associated with the empty string.
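A dictionary-of-children implementation makes the "position determines the key" idea concrete: each character consumed is one step down the tree. A short sketch (class and method names are my own):

```python
class TrieNode:
    def __init__(self):
        self.children = {}    # maps a character to the child node
        self.is_word = False  # True if some inserted key ends at this node

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            # Create the child for this character if it does not exist yet.
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def contains(self, word):
        node = self._walk(word)
        return node is not None and node.is_word

    def starts_with(self, prefix):
        # True if any stored key begins with this prefix.
        return self._walk(prefix) is not None

    def _walk(self, s):
        node = self.root
        for ch in s:
            node = node.children.get(ch)
            if node is None:
                return None
        return node
```

After inserting "car" and "card", the node for "ca" exists but is not marked as a word, so `contains("ca")` is false while `starts_with("ca")` is true.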

Fenwick tree

A Fenwick tree, also known as a binary indexed tree (BIT), is conceptually a tree but is implemented as an array. The indices of the array represent the nodes of the tree, and the index of any node's parent or child can be obtained through bitwise operations. Each element of the array stores a pre-computed sum over an interval of values, and these sums are updated whenever the underlying values change. Time complexity: Range sum: O(log n), Update: O(log n)
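The bitwise navigation mentioned above comes down to one trick: `i & (-i)` isolates the lowest set bit of an index, which is the step size between related nodes. A compact sketch:

```python
class FenwickTree:
    """Binary indexed tree over n elements; indices are 1-based internally."""
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)

    def update(self, i, delta):
        # Add delta at position i; O(log n). Each step moves to the next
        # node whose stored interval also covers position i.
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)

    def prefix_sum(self, i):
        # Sum of elements 1..i; O(log n). Stripping the lowest set bit
        # jumps to the preceding disjoint interval.
        total = 0
        while i > 0:
            total += self.tree[i]
            i -= i & (-i)
        return total

    def range_sum(self, lo, hi):
        # Sum of elements lo..hi, as a difference of two prefix sums.
        return self.prefix_sum(hi) - self.prefix_sum(lo - 1)
```

Building the tree over the values 3, 1, 4, 1, 5 gives prefix_sum(3) = 8 and range_sum(2, 4) = 6.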

Segment tree

A segment tree is a tree data structure for storing intervals and segments. It allows querying which of the stored segments contain a given point. Time complexity: Range query: O(log n), Update: O(log n)

Heap

A heap is a tree-based data structure that satisfies the heap property: throughout the heap, the keys of parent and child nodes obey the same ordering criterion. Heaps are divided into max-heaps and min-heaps. In a max-heap, a parent's key is always greater than or equal to the keys of its children, and the root holds the largest key. In a min-heap, a parent's key is always less than or equal to the keys of its children, and the root holds the smallest key. Time complexity: Index: O(log n), Search: O(log n), Insert: O(log n), Delete: O(log n), Access max/min: O(1)
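Python's standard library ships a binary min-heap as `heapq`, which operates on a plain list; a max-heap is commonly simulated by negating the keys. A quick illustration of the operations and their costs:

```python
import heapq

# Min-heap: heapq maintains the heap property on a plain list.
heap = []
for x in [5, 1, 4, 2]:
    heapq.heappush(heap, x)   # O(log n) insert

smallest = heap[0]            # O(1) access to the minimum
first = heapq.heappop(heap)   # O(log n) delete-min, returns 1
second = heapq.heappop(heap)  # returns 2

# Max-heap simulation: push negated keys, negate again on the way out.
max_heap = []
for x in [5, 1, 4]:
    heapq.heappush(max_heap, -x)
largest = -heapq.heappop(max_heap)
```

Note the asymmetry the table above captures: peeking at the min is O(1), but actually removing it costs O(log n) because the heap must be re-balanced.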

Hashing

Hashing maps data of arbitrary length to data of fixed length. The value returned by a hash function is called a hash value, hash code, or simply a hash. If two different keys produce the same hash value, a collision occurs. Hash map: a hash map is a data structure that stores key-value pairs. It uses a hash function to convert a key into a bucket or slot index, making it easy to look up the associated value. Collision resolution: Separate chaining: each bucket is independent, and each index holds a list of the entries that hashed to it; the cost of an operation is the (constant) time to find the bucket plus the time to walk that list. Open addressing: when a new entry is inserted, the bucket given by its hash is checked; if it is already occupied, the next candidate position is chosen according to a probing scheme until an unoccupied slot is found. With open addressing, an element's position is therefore not always determined by its hash value alone.
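Separate chaining is short enough to sketch directly; here each bucket is a Python list of (key, value) pairs, and the hash modulo the bucket count picks the bucket (a toy illustration, not how Python's built-in dict works internally):

```python
class ChainedHashMap:
    """A minimal hash map using separate chaining: each bucket holds a
    list of (key, value) pairs whose keys hashed to the same index."""
    def __init__(self, num_buckets=16):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        # Constant-time step: hash the key and reduce it to a bucket index.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite an existing key
                return
        bucket.append((key, value))       # otherwise chain a new entry

    def get(self, key, default=None):
        # Linear walk of one chain; fast as long as chains stay short.
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default
```

With few buckets and many keys the chains grow long and lookups degrade toward O(n), which is why real hash maps resize as they fill up.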

Graph

A graph is an ordered pair G = (V, E) comprising a set V of vertices or nodes and a set E of edges or arcs, where each edge in E joins two elements of V (that is, an edge is associated with two vertices, as an unordered pair of those vertices). Undirected graph: the adjacency matrix of the graph is symmetric, so if there is an edge from node u to node v, there must also be an edge from node v to node u. Directed graph: the adjacency matrix is not necessarily symmetric, so an edge from node u to node v does not imply an edge from node v to node u.

Algorithm

Sort

Quicksort

Stable: No. Time complexity: Best: O(n log n), Worst: O(n^2), Average: O(n log n)
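A short functional sketch of quicksort (not the in-place partitioning variant, which is what real implementations use, but the clearest way to show the idea): pick a pivot, split into smaller and larger elements, and recurse.

```python
def quicksort(arr):
    """Average O(n log n); worst case O(n^2) when the pivot splits badly."""
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    # Partition into three groups relative to the pivot, then recurse on
    # the two unsorted sides. Equal elements need no further sorting.
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```

The O(n^2) worst case in the table corresponds to every pivot landing at an extreme, so each recursion level only shrinks the problem by one element.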

Merge sort

Merge sort is a divide-and-conquer algorithm. It repeatedly divides an array into two halves, sorts the left subarray and the right subarray, and then merges the two into a new sorted array. Stable: Yes. Time complexity: Best: O(n log n), Worst: O(n log n), Average: O(n log n)

Bucket sort

Bucket sort is a sorting algorithm that distributes elements into a number of buckets. Each bucket is then sorted individually, either with a different sorting algorithm or by recursively applying bucket sort. Time complexity: Best: Ω(n + k), Worst: O(n^2), Average: Θ(n + k)

Radix sort

Radix sort is similar to bucket sort in that it distributes elements into a number of buckets. The difference is that radix sort does not sort each bucket individually after distributing the elements; instead, it merges the buckets back together directly. Time complexity: Best: Ω(nk), Worst: O(nk), Average: Θ(nk)

Graph Algorithm

Depth-first search

Depth-first search is a graph traversal algorithm that explores as far as possible along each branch before backtracking. Time complexity: O(|V| + |E|)

Breadth-first search

Breadth-first search is a graph traversal algorithm that visits all of a node's neighbors before moving on to their children. Time complexity: O(|V| + |E|)
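The two traversals differ only in the order nodes are taken up: BFS uses a queue, DFS uses recursion (or an explicit stack). A sketch over an adjacency-list graph represented as a dict (the representation is my own choice for the example):

```python
from collections import deque

def bfs(graph, start):
    """Visit nodes in breadth-first order. graph: dict node -> neighbours."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()   # FIFO: oldest discovered node first
        order.append(node)
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order

def dfs(graph, start, visited=None):
    """Visit nodes in depth-first order using recursion."""
    if visited is None:
        visited = set()
    visited.add(start)
    order = [start]
    for neighbour in graph[start]:
        if neighbour not in visited:
            # Go deep along this branch before trying the next neighbour.
            order.extend(dfs(graph, neighbour, visited))
    return order
```

On the diamond graph A → {B, C}, B → D, C → D, BFS visits A, B, C, D (level by level) while DFS visits A, B, D, C (down one branch, then backtracking).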

Topological sort

A topological sort is a linear ordering of a graph's nodes such that for every edge from node u to node v, u comes before v in the ordering. Time complexity: O(|V| + |E|)

Dijkstra's algorithm

Dijkstra's algorithm finds the single-source shortest paths in a graph with non-negative edge weights. Time complexity: O(|V|^2)
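The O(|V|^2) bound above is for the array-based version; a sketch using a binary heap as the priority queue (the common practical variant) runs in O((|V| + |E|) log |V|) instead. Function and variable names here are my own:

```python
import heapq

def dijkstra(graph, source):
    """graph: dict node -> list of (neighbour, weight) pairs.
    Edge weights must be non-negative. Returns a dict of shortest
    distances from source to every reachable node."""
    dist = {source: 0}
    pq = [(0, source)]  # (distance-so-far, node)
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float('inf')):
            continue  # stale queue entry; a shorter path was already found
        for neighbour, weight in graph[node]:
            candidate = d + weight
            if candidate < dist.get(neighbour, float('inf')):
                # Relax the edge and reschedule the neighbour.
                dist[neighbour] = candidate
                heapq.heappush(pq, (candidate, neighbour))
    return dist
```

On the graph A-B (1), A-C (4), B-C (2), B-D (5), C-D (1), the shortest path to D is A → B → C → D with total weight 4, even though the direct-looking routes cost more.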

Bellman-Ford algorithm

The Bellman-Ford algorithm finds the shortest paths from a single source node to all other nodes in a weighted graph. Although its time complexity is worse than Dijkstra's algorithm, it can handle graphs containing negative-weight edges. Time complexity: Best: O(|E|), Worst: O(|V| · |E|)

Floyd-Warshall algorithm

The Floyd-Warshall algorithm finds the shortest paths between all pairs of nodes in a weighted graph with no negative cycles. A single execution of the algorithm finds the shortest path (sum of edge weights) between every pair of nodes. Time complexity: Best: O(|V|^3), Worst: O(|V|^3), Average: O(|V|^3)

Prim's algorithm

Prim's algorithm is a greedy algorithm for finding a minimum spanning tree in a weighted undirected graph. In other words, it finds the minimum-weight subset of edges that connects all the nodes of the graph. Time complexity: O(|V|^2)

Kruskal's algorithm

Kruskal's algorithm is another greedy algorithm for computing a minimum spanning tree, but in Kruskal's algorithm the graph does not have to be connected. Time complexity: O(|E| log |V|)

Greedy algorithm

A greedy algorithm always makes the choice that looks best at the moment, in the hope that the end result is globally optimal. A problem that can be solved with a greedy algorithm must have the following two properties: Optimal substructure: an optimal solution to the problem contains optimal solutions to its subproblems. Greedy choice property: making the locally optimal choice at each step leads to a globally optimal solution.

Example: coin selection. Given a target sum V and n coin types, where coin type i is worth coinValue[i] cents, with i in the range [0...n–1], and assuming an unlimited supply of each type, find the minimum number of coins needed to make V cents. Coins: penny (1 cent), nickel (5 cents), dime (10 cents), quarter (25 cents). Suppose the target V is 41. With the greedy algorithm we repeatedly take the largest coin whose value is less than or equal to V and subtract its value from V:

V = 41 | 0 coins used
V = 16 | 1 coin used (41 - 25 = 16)
V = 6 | 2 coins used (16 - 10 = 6)
V = 1 | 3 coins used (6 - 5 = 1)
V = 0 | 4 coins used (1 - 1 = 0)
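The trace above can be written as a few lines of Python (function name my own). Note the hedge built into the algorithm itself: for canonical coin systems like US coins (25, 10, 5, 1) the greedy choice is optimal, but for arbitrary denominations it can fail, which is why the two properties above must actually hold.

```python
def greedy_coin_count(amount, denominations):
    """Repeatedly take as many of the largest remaining coin as fit."""
    count = 0
    for coin in sorted(denominations, reverse=True):
        count += amount // coin  # how many of this coin fit
        amount %= coin           # what is left to make up
    return count
```

For amount 41 with coins {1, 5, 10, 25} this returns 4, matching the trace (25 + 10 + 5 + 1). With a non-canonical system such as {1, 3, 4} and amount 6, greedy takes 4 + 1 + 1 = three coins where 3 + 3 = two coins would do.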

Bit manipulation

Bit manipulation techniques operate on data at the level of individual bits. Using bitwise operations can yield faster running times and lower memory usage.

Test bit k: s & (1 << k)
Set bit k: s |= (1 << k)
Clear bit k: s &= ~(1 << k)
Toggle bit k: s ^= (1 << k)
Multiply by 2^n: s << n
Divide by 2^n: s >> n
Intersection: s & t
Union: s | t
Subtraction: s & ~t
Extract lowest set bit: s & (-s)
Extract lowest unset bit: ~s & (s + 1)
Swap values: x ^= y; y ^= x; x ^= y
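The identities above can be checked directly; here each line exercises one of them on small values (binary literals make the bit patterns visible):

```python
s = 0b0101  # working bit set: bits 0 and 2 are on

assert s & (1 << 2)                        # test bit 2: non-zero, so it is set
assert (s | (1 << 1)) == 0b0111            # set bit 1
assert (s & ~(1 << 2)) == 0b0001           # clear bit 2
assert (s ^ (1 << 0)) == 0b0100            # toggle bit 0
assert (6 << 1) == 12                      # multiply by 2^1
assert (6 >> 1) == 3                       # divide by 2^1
assert (0b0110 & -0b0110) == 0b0010        # extract lowest set bit
assert (~0b0111 & (0b0111 + 1)) == 0b1000  # extract lowest unset bit

# Swap two values with XOR, no temporary variable needed.
x, y = 3, 5
x ^= y
y ^= x
x ^= y
assert (x, y) == (5, 3)
```

One caveat for Python specifically: its integers are arbitrary-precision, so `~s` conceptually flips an infinite sign-extended bit string rather than a fixed-width word, though the identities above still hold.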

Run-time analysis

Big O notation: Big O describes an asymptotic upper bound of an algorithm and is used to characterize the worst case.

Little o notation: Little o describes an upper bound that is not asymptotically tight.

Big Ω notation: Big Ω describes an asymptotic lower bound of an algorithm.

Little ω notation: Little ω describes a lower bound that is not asymptotically tight.

Theta (Θ) notation: Θ describes an asymptotically tight bound of an algorithm, bounding it both from above and from below.

Think that's the end? No: this knowledge is not meant to stay theoretical; it comes with code implementations as well.

All of this actually comes from a GitHub repo: https://github.com/kdn251/interviews.

In addition to the algorithm and data structure knowledge above, the repo also recommends algorithm practice sites, video tutorials, and interview guides, along with interview questions (and answer code) from Google, Facebook, and other well-known companies. Download the sample code or bookmark the practice sites. Enjoy!
