Principles and Design of the Consistent Hashing Algorithm


I. Preface

Consistent hashing, first proposed by Karger at MIT in 1997, is primarily used to address the service disruption caused by outages and capacity expansion in volatile distributed web systems. The idea behind the algorithm is now widely used and has developed considerably in practice.

II. Algorithm Design

1. Source of the problem

Consider a service consisting of 6 servers, each responsible for storing 1/6 of the data. When is the service usable again after Server1 goes down?

As the table below makes clear, when Server1 goes down, the hash segment it served becomes completely unavailable, so all of the data must be rehashed so that the remaining 5 machines can serve it. Because each machine is then responsible for a differently sized data segment, data must be migrated between servers, and the service is unavailable until that migration completes.
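The weakness of naive modulo placement can be demonstrated with a short sketch (the key names and server count are illustrative, not from the original): when the server count drops from 6 to 5, the placement of most keys changes, which is exactly the mass migration described above.

```python
import hashlib

def server_for(key: str, num_servers: int) -> int:
    """Naive modulo placement: hash the key, then take it mod the server count."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % num_servers

keys = [f"key-{i}" for i in range(6000)]

# Placement with 6 servers, then again after Server1 is removed (5 servers).
before = {k: server_for(k, 6) for k in keys}
after = {k: server_for(k, 5) for k in keys}

moved = sum(1 for k in keys if before[k] != after[k])
print(f"{moved / len(keys):.0%} of keys changed servers")
```

For uniformly distributed hashes, a key keeps its server only when its hash is congruent mod both 6 and 5, so the vast majority of keys relocate even though only one server failed.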

2. The classic consistent hashing algorithm

To address the drawbacks of rehashing, Karger proposed an algorithm whose core idea is the "virtual node".

All data is mapped onto a set of virtual nodes whose number is larger than the number of servers, and the virtual nodes are then mapped onto the real servers. When a server goes down, the number of virtual nodes stays fixed, so no rehashing is needed at all; only the virtual nodes whose service has become unavailable need to be reassigned, which means only the downed node's data has to be migrated.

In the classic algorithm, the next real node after the downed server on the ring takes over its service.
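A minimal sketch of the classic ring (class and helper names are mine, not from the original): each server places several virtual points on the ring, a key is served by the first point clockwise from its hash, and removing a server moves only the keys that were on it.

```python
import bisect
import hashlib

def point(s: str) -> int:
    """Map a string to a position on the hash ring."""
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class HashRing:
    """Classic consistent hashing with virtual nodes."""

    def __init__(self, servers, vnodes=10):
        self.ring = {}  # ring position -> server name
        for s in servers:
            for i in range(vnodes):
                self.ring[point(f"{s}#vnode{i}")] = s
        self.points = sorted(self.ring)

    def lookup(self, key: str) -> str:
        idx = bisect.bisect(self.points, point(key)) % len(self.points)
        return self.ring[self.points[idx]]

    def remove(self, server: str):
        # Only the downed server's points disappear; keys on them fall
        # through to the next real node clockwise. No other key moves.
        self.points = [p for p in self.points if self.ring[p] != server]
        self.ring = {p: s for p, s in self.ring.items() if s != server}

ring = HashRing([f"Server{i}" for i in range(6)])
before = {f"key-{i}": ring.lookup(f"key-{i}") for i in range(1000)}
ring.remove("Server1")
after = {k: ring.lookup(k) for k in before}
moved = [k for k in before if before[k] != after[k]]
print(f"{len(moved)} of {len(before)} keys moved")
```

Unlike the modulo scheme, every key that moves here previously lived on Server1; keys on the other five servers are untouched.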

III. Algorithm Improvements

1. Problems with the classic consistent hashing algorithm

The classic algorithm only fixes the flaws of rehashing; it is not itself perfect. The main issues are:

(1) When Server1 goes down, Server2 has to serve double its share of the data, and if Server1 is permanently retired, the overall system load becomes completely unbalanced.

(2) If every server can handle all of its data's reads and writes, then writing two copies of each item to different servers under normal operation (primary/standby, or load-balanced) lets clients read a downed node's data directly from its backup, eliminating the data migration that appears in the classic algorithm.

2. Dynamo's improvement practices

Amazon's large-scale data storage platform Dynamo uses consistent hashing, but not the classic algorithm; instead it reassigns only the failed node's virtual nodes.

The system stores the mapping between all virtual nodes and real servers in a configuration system. When some virtual nodes become unavailable, those virtual nodes are simply reconfigured onto other real servers, so no large-scale data migration is needed and the load across all servers stays relatively balanced.

Virtual nodes      0-4, 5    10-14, 6    15-19, 7    20-24, 8    25-29, 9
Recovery server    Server0   Server2     Server3     Server4     Server5
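The configuration-table approach can be sketched as follows (a simplification with hypothetical names; Dynamo's real mechanism also involves replication and membership protocols): 30 virtual nodes are mapped to 6 servers, and on failure only the failed server's virtual nodes are spread one-by-one across the survivors.

```python
import hashlib
from itertools import cycle

NUM_VNODES = 30
# Configuration table: vnodes 0-4 -> Server0, 5-9 -> Server1, and so on.
config = {v: f"Server{v // 5}" for v in range(NUM_VNODES)}

def server_for(key: str) -> str:
    vnode = int(hashlib.md5(key.encode()).hexdigest(), 16) % NUM_VNODES
    return config[vnode]

def fail(server: str):
    """Reassign the failed server's vnodes across the survivors, one each.

    Vnodes belonging to healthy servers are never touched, so only the
    failed server's data needs to move, and it is spread evenly."""
    survivors = cycle(sorted({s for s in config.values() if s != server}))
    for v in sorted(v for v, s in config.items() if s == server):
        config[v] = next(survivors)

fail("Server1")  # vnodes 5-9 are redistributed; no other vnode moves
```

With this round-robin reassignment, Server1's vnodes 5 through 9 land on Server0, Server2, Server3, Server4, and Server5 respectively, matching the recovery table above.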
IV. Algorithm Extension

The consistent hashing algorithm was designed to solve the problems of server outages and capacity expansion, but the idea of the "virtual node" has developed further: some distributed systems use it to implement load balancing and optimal access strategies.

In real system scenarios, two identically deployed systems may not provide the same quality of service, mainly because:

(1) Individual hardware differences result in different server performance.

(2) Network communication efficiency between IDC servers differs because of machine-room switches and network bandwidth.

(3) Service performance can differ between a China Telecom IDC and a China Unicom IDC because the network operators differ.

(4) A server's network or machine room may come under attack.

So even two identical deployments may need to provide differentiated service, and virtual nodes make it possible to adjust the system dynamically and flexibly to optimize it.

Consider a distributed system consisting of 2 deployments with 3 servers each, where S0-1 denotes Server0 of deployment 1. The system's configuration administrator can dynamically adjust the mapping between virtual nodes and real servers according to each server's actual service efficiency. Client systems can likewise adjust their own access policy based on observed response rates or response times.

Virtual node    0-2     3-4     5-7     8-9     10-12   13-14
Server          S0-1    S0-2    S1-1    S1-2    S2-1    S2-2
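Such a weighted mapping can be sketched directly (the weights below are hypothetical, chosen to reproduce the table above): each server receives a number of virtual nodes proportional to its measured capacity, so a faster server simply owns more of them.

```python
# Weight = number of virtual nodes assigned to each server; an
# administrator would tune these based on observed performance.
weights = {"S0-1": 3, "S0-2": 2, "S1-1": 3, "S1-2": 2, "S2-1": 3, "S2-2": 2}

def build_mapping(weights: dict) -> dict:
    """Hand out consecutive virtual node IDs in proportion to weight."""
    mapping, vnode = {}, 0
    for server, w in weights.items():
        for _ in range(w):
            mapping[vnode] = server
            vnode += 1
    return mapping

mapping = build_mapping(weights)
# vnodes 0-2 -> S0-1, 3-4 -> S0-2, 5-7 -> S1-1, ... matching the table
```

Rebalancing then amounts to editing the weights and rebuilding the table, with data movement limited to the virtual nodes whose owner changed.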
V. References

(1) Consistent hashing (Wikipedia)
(2) Consistent hashing
(3) Dynamo: Amazon's Highly Available Key-value Store
