Common Algorithms and Data Structures for Big Data Learning


1. Bloom Filter

A Bloom filter consists of a long bit vector and a set of hash functions.

Pros: reduces I/O operations and saves space.

Cons: does not support deletion, and lookups can return false positives.

To support deletion, use a counting Bloom filter instead.
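
As a rough illustration, here is a minimal Bloom filter sketch in C. The bit-vector size, the number of hash positions K, and the two base hash functions are illustrative assumptions, not part of the original article; K positions are derived from the two hashes via double hashing.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BITS  1024          /* size of the bit vector (assumption) */
    #define K     3             /* number of hash positions (assumption) */

    static uint8_t bits[BITS / 8];

    /* Two simple string hashes; any reasonable hash functions would do. */
    static uint64_t hash1(const char *s) {
        uint64_t h = 1469598103934665603ULL;            /* FNV-1a */
        for (; *s; s++) { h ^= (uint8_t)*s; h *= 1099511628211ULL; }
        return h;
    }
    static uint64_t hash2(const char *s) {
        uint64_t h = 5381;                              /* djb2 */
        for (; *s; s++) h = h * 33 + (uint8_t)*s;
        return h;
    }

    static void bf_add(const char *s) {
        uint64_t h1 = hash1(s), h2 = hash2(s);
        for (int i = 0; i < K; i++) {
            uint64_t pos = (h1 + (uint64_t)i * h2) % BITS;
            bits[pos / 8] |= (uint8_t)(1u << (pos % 8));
        }
    }

    /* Returns 1 = "possibly present" (may be a false positive),
     * 0 = definitely absent. */
    static int bf_maybe_contains(const char *s) {
        uint64_t h1 = hash1(s), h2 = hash2(s);
        for (int i = 0; i < K; i++) {
            uint64_t pos = (h1 + (uint64_t)i * h2) % BITS;
            if (!(bits[pos / 8] & (1u << (pos % 8)))) return 0;
        }
        return 1;
    }

    int main(void) {
        bf_add("rowkey-42");
        /* first prints 1; second is almost certainly 0, but a false
         * positive is possible by design */
        printf("%d %d\n", bf_maybe_contains("rowkey-42"),
                          bf_maybe_contains("rowkey-99"));
        return 0;
    }

A result of 0 means the element is definitely absent; a result of 1 only means "possibly present", which is exactly the false-positive behavior noted above.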


2. SkipList (Skip List)


Core idea: a skip list consists of multiple levels, each of which is a sorted linked list. The bottom level contains all elements, and each higher level contains progressively fewer. Each node carries two pointers: one pointing right to the next node on the same level, and one pointing down to the level below.
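
A minimal search sketch under the right/down node layout described above, assuming the search starts from a top-left sentinel head whose key is smaller than every stored key (the node type and sentinel convention are illustrative assumptions):

    #include <stddef.h>

    typedef struct SkipNode {
        int key;
        struct SkipNode *right;   /* next node on the same level */
        struct SkipNode *down;    /* same position on the level below */
    } SkipNode;

    /* Move right while the next key is still <= target, then drop one
     * level; repeat until the bottom level is exhausted. */
    static SkipNode *skiplist_find(SkipNode *head, int key) {
        SkipNode *cur = head;
        while (cur) {
            while (cur->right && cur->right->key <= key)
                cur = cur->right;
            if (cur->key == key)
                return cur;       /* found on some level */
            cur = cur->down;      /* descend to a finer level */
        }
        return NULL;              /* not present */
    }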


For concurrent use, a skip list can be protected with a lock, or updated lock-free with CAS operations.

CAS:

Compare-and-swap (CAS) is a mechanism that avoids the performance cost of locks in multithreaded code. A CAS operation takes three operands: a memory location (V), the expected old value (A), and the new value (B). If the value at the memory location equals the expected old value, the processor atomically updates the location to the new value; otherwise it does nothing. In either case, it returns the value that was at the location before the CAS instruction. In effect, CAS says: "I think location V should contain the value A; if it does, put B there; otherwise, leave the location unchanged and just tell me its current value."
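
A tiny illustration of these semantics, using the GCC/Clang builtin __sync_val_compare_and_swap as a stand-in for the CAS instruction (C11 atomics would work just as well):

    #include <stdio.h>

    int main(void) {
        int v = 10;

        /* Expected value matches: the location is updated to 20 and the
         * previous value (10) is returned. */
        int old1 = __sync_val_compare_and_swap(&v, 10, 20);

        /* Expected value no longer matches (v is 20 now): nothing is
         * changed and the current value (20) is returned. */
        int old2 = __sync_val_compare_and_swap(&v, 10, 30);

        printf("old1=%d old2=%d v=%d\n", old1, old2, v);  /* old1=10 old2=20 v=20 */
        return 0;
    }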

Insertion implemented with CAS:

    typedef struct Node {
        int value;
        struct Node *next;
    } Node;

    /* Lock-free insert of `node` right after `prev`: retry until the CAS
     * on prev->next succeeds. */
    void insert(Node *prev, Node *node) {
        for (;;) {
            Node *old = prev->next;        /* snapshot the current successor   */
            node->next = old;              /* link the new node in front of it */
            if (__sync_bool_compare_and_swap(&prev->next, old, node))
                return;                    /* succeeded: no concurrent change  */
            /* another thread changed prev->next first: retry */
        }
    }


3. LSM Tree (Log-Structured Merge-Tree)

Compared with a B+ tree, an LSM tree trades some read performance for a significant gain in write performance. Objective: turn a large number of random writes into batched sequential writes.

Several small, sorted structures are kept in memory; a lookup binary-searches each of them, and the small trees are continually merged into larger trees so that insertions happen in bulk. Lookups can be optimized with a Bloom filter, which quickly reports when a small structure definitely does not contain the target data. A simplified sketch of this write path follows.
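
A deliberately simplified sketch of the idea, assuming integer keys, a tiny fixed-size memtable, and in-memory arrays standing in for on-disk sorted runs. Compaction (merging small runs into larger ones) and the Bloom-filter optimization are omitted for brevity; all names and sizes here are assumptions for illustration.

    #include <stdio.h>
    #include <string.h>

    #define MEM_CAP  4
    #define MAX_RUNS 8

    typedef struct { int key, val; } Entry;

    static Entry memtable[MEM_CAP];            /* small sorted in-memory buffer */
    static int   mem_len = 0;

    static Entry runs[MAX_RUNS][MEM_CAP];      /* immutable sorted runs ("disk") */
    static int   run_count = 0;

    /* Insert into the sorted memtable; when full, flush it as one
     * sequential batch into a new run. */
    static void lsm_put(int key, int val) {
        if (mem_len == MEM_CAP) {
            memcpy(runs[run_count++], memtable, sizeof(memtable));
            mem_len = 0;
        }
        int i = mem_len++;
        while (i > 0 && memtable[i - 1].key > key) {
            memtable[i] = memtable[i - 1];
            i--;
        }
        memtable[i].key = key;
        memtable[i].val = val;
    }

    /* Binary search inside one sorted structure. */
    static int run_get(const Entry *run, int len, int key, int *val) {
        int lo = 0, hi = len - 1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;
            if (run[mid].key == key) { *val = run[mid].val; return 1; }
            if (run[mid].key < key) lo = mid + 1; else hi = mid - 1;
        }
        return 0;
    }

    /* Lookup: memtable first, then runs from newest to oldest. */
    static int lsm_get(int key, int *val) {
        if (run_get(memtable, mem_len, key, val)) return 1;
        for (int r = run_count - 1; r >= 0; r--)
            if (run_get(runs[r], MEM_CAP, key, val)) return 1;
        return 0;
    }

    int main(void) {
        for (int k = 0; k < 10; k++) lsm_put(k, k * 100);
        int v = 0;
        int found = lsm_get(7, &v);
        printf("found=%d val=%d\n", found, v);   /* found=1 val=700 */
        return 0;
    }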

4. Hash Tree (Merkle Tree)

A hash tree is used to quickly locate a small number of changes within a large amount of data. Each data item is hashed, adjacent hashes are then hashed together level by level, and the process ends with a single top hash. Comparing top hashes shows at once whether two copies differ, and descending into mismatching subtrees pinpoints where.
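
A small sketch of the bottom-up hashing: leaves are hashes of the data blocks, each internal node hashes its two children, and the value left at the top is the top hash. The toy 64-bit hash and the power-of-two block count are illustrative assumptions.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint64_t hash_bytes(const void *p, size_t n) {
        const uint8_t *b = p;
        uint64_t h = 1469598103934665603ULL;   /* FNV-1a, for illustration */
        for (size_t i = 0; i < n; i++) { h ^= b[i]; h *= 1099511628211ULL; }
        return h;
    }

    /* Compute the top hash of n blocks (n assumed to be a power of two,
     * n <= 64 in this sketch). */
    static uint64_t merkle_root(const char *blocks[], size_t n) {
        uint64_t level[64];
        for (size_t i = 0; i < n; i++)
            level[i] = hash_bytes(blocks[i], strlen(blocks[i]));
        while (n > 1) {
            for (size_t i = 0; i < n / 2; i++) {
                uint64_t pair[2] = { level[2 * i], level[2 * i + 1] };
                level[i] = hash_bytes(pair, sizeof(pair));
            }
            n /= 2;
        }
        return level[0];
    }

    int main(void) {
        const char *a[] = { "block0", "block1", "block2",  "block3" };
        const char *b[] = { "block0", "block1", "CHANGED", "block3" };
        /* prints 0: the single changed block is detected via the top hash */
        printf("same? %d\n", merkle_root(a, 4) == merkle_root(b, 4));
        return 0;
    }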

5. Cuckoo Hashing

Cuckoo hashing uses two hash functions, H1(x) and H2(x). When x is inserted, both H1(x) and H2(x) are computed; if either bucket is empty, x is placed there. If both are occupied, one bucket is chosen, its occupant y is kicked out, x takes its place, and the same procedure is repeated for y. A maximum number of displacements is set; when it is reached, the number of buckets is increased or new hash functions are chosen.
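
A minimal sketch of this insertion procedure, assuming nonzero integer keys (0 marks an empty slot), two small tables, and illustrative hash functions. When the displacement limit is reached, the caller is expected to grow the table or pick new hash functions, as described above.

    #include <stdio.h>

    #define CAP       16
    #define MAX_KICKS 8

    static int table1[CAP], table2[CAP];

    static unsigned h1(int x) { return ((unsigned)x * 2654435761u) % CAP; }
    static unsigned h2(int x) { return ((unsigned)x * 40503u + 7u)  % CAP; }

    /* Returns 1 on success, 0 if the displacement limit was reached
     * (caller should then grow the table or choose new hash functions). */
    static int cuckoo_insert(int x) {
        int evict_from = 1;
        for (int kicks = 0; kicks < MAX_KICKS; kicks++) {
            unsigned i = h1(x), j = h2(x);
            if (table1[i] == 0) { table1[i] = x; return 1; }
            if (table2[j] == 0) { table2[j] = x; return 1; }
            /* Both candidate buckets are full: kick one occupant out
             * (alternating tables) and continue with the displaced key. */
            int y;
            if (evict_from == 1) { y = table1[i]; table1[i] = x; }
            else                 { y = table2[j]; table2[j] = x; }
            x = y;
            evict_from = 3 - evict_from;
        }
        return 0;
    }

    /* A key can only ever live in one of its two candidate buckets. */
    static int cuckoo_contains(int x) {
        return table1[h1(x)] == x || table2[h2(x)] == x;
    }

    int main(void) {
        for (int k = 1; k <= 10; k++) cuckoo_insert(k);
        printf("%d %d\n", cuckoo_contains(7), cuckoo_contains(42));  /* 1 0 */
        return 0;
    }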

