Distributed Cache Component Hazelcast


Hazelcast is an open-source, distributed in-memory data grid implemented in Java. It provides the following features:

    1. Provides distributed implementations of java.util.{Queue, Set, List, Map} (a minimal usage sketch follows this list)
    2. Provides a distributed implementation of java.util.concurrent.ExecutorService
    3. Provides a distributed implementation of java.util.concurrent.locks.Lock
    4. Provides distributed Topics for publish/subscribe messaging
    5. Integrates with Java EE containers through JCA, with transaction support
    6. Provides a distributed MultiMap for one-to-many relationships
    7. Provides distributed events and listeners
    8. Provides clustering and cluster membership mechanisms
    9. Supports dynamic HTTP session clustering
    10. Supports monitoring and managing the cluster via JMX
    11. Provides a second-level cache provider for Hibernate
    12. Provides a dynamic clustering mechanism
    13. Provides dynamic partitioning and backup mechanisms
    14. Supports dynamic failure recovery
    15. Integrates with a simple JAR package
    16. Fast
    17. Small footprint
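
The sketch below is a minimal example of the first feature, an embedded member exposing a distributed java.util.Map, assuming Hazelcast 4.x or later is on the classpath; the map name "capitals" is an arbitrary example:

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import java.util.Map;

    public class EmbeddedMemberExample {
        public static void main(String[] args) {
            // Start an embedded member; other members started the same way
            // on the same network discover each other and form a cluster.
            HazelcastInstance member = Hazelcast.newHazelcastInstance();

            // "capitals" is an arbitrary example name; the returned map is a
            // distributed implementation of java.util.Map shared by the cluster.
            Map<String, String> capitals = member.getMap("capitals");
            capitals.put("France", "Paris");
            System.out.println(capitals.get("France"));

            member.shutdown();
        }
    }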

Hazelcast Architecture Diagram

Hazelcast topology

Hazelcast supports two topologies for a distributed cluster. In "embedded member" mode, the JVM that contains the application code joins the Hazelcast cluster directly. In "client plus member" mode, separate JVMs (which may be on the same or different hosts) join the Hazelcast cluster as members, and the application JVMs connect to them as clients. The two topologies are shown below:

Embedded member mode:

Client plus member mode:

In most cases the client plus member topology is preferable, because it offers greater resilience at the cluster level: member JVMs can be taken down and restarted without affecting the overall application, since a Hazelcast client simply reconnects to another member of the cluster. Another advantage is that the client plus member topology isolates application code from purely cluster-level events.
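
As a minimal sketch of the client side of this topology, the code below assumes the hazelcast-client module is on the classpath and that a member JVM is already running at 127.0.0.1:5701 (an assumed address):

    import com.hazelcast.client.HazelcastClient;
    import com.hazelcast.client.config.ClientConfig;
    import com.hazelcast.core.HazelcastInstance;
    import java.util.Map;

    public class ClientPlusMemberExample {
        public static void main(String[] args) {
            // Point the client at a running member; 127.0.0.1:5701 is an
            // assumed address for a locally started member JVM.
            ClientConfig config = new ClientConfig();
            config.getNetworkConfig().addAddress("127.0.0.1:5701");

            // The client owns no partitions itself, so it can be restarted
            // or reconnected to another member without moving any data.
            HazelcastInstance client = HazelcastClient.newHazelcastClient(config);

            Map<String, String> capitals = client.getMap("capitals");
            System.out.println(capitals.get("France"));

            client.shutdown();
        }
    }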

Hazelcast Data Partitioning

Data shards in Hazelcast are called partitions. By default, Hazelcast has 271 partitions. Given an entry's key, Hazelcast serializes it and hashes the result to map the entry to a specific partition. Partitions are distributed evenly across the memory of the cluster members, and backup replicas of each partition keep the data highly available.

The following shows how the partition distribution changes as nodes are added:

Single node:

When a single node is started, it is the master and holds all 271 partitions by default.

Two nodes:

When a second node is started, it joins the cluster created by the first node. The partitions held by the first node are redistributed evenly across the cluster, and each member also holds backup copies of the partitions owned by the other member.

Joining more nodes:

When more nodes are added, the data shards on the existing nodes are redistributed to the newly added nodes so that the shards remain evenly distributed across the cluster, and the data is backed up as well. The following is the data shard distribution for a 4-node cluster:
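
Beyond the diagram, the sketch below shows one way to observe this redistribution from application code, assuming the Hazelcast 4.x PartitionService API; run it in several JVMs and the 271 partitions spread across the members that have joined:

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.partition.Partition;
    import java.util.HashMap;
    import java.util.Map;

    public class PartitionDistributionExample {
        public static void main(String[] args) {
            // Start (or join) a member.
            HazelcastInstance member = Hazelcast.newHazelcastInstance();

            // Count how many partitions each member currently owns.
            Map<String, Integer> owned = new HashMap<>();
            for (Partition partition : member.getPartitionService().getPartitions()) {
                owned.merge(String.valueOf(partition.getOwner()), 1, Integer::sum);
            }
            owned.forEach((owner, count) ->
                    System.out.println(owner + " owns " + count + " partitions"));
        }
    }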

Partitioning algorithm

Hazelcast uses a hash algorithm to shard data. Given a map key or the name of a distributed object, the key or name is first serialized into a byte array, the byte array is hashed, and the hash result is taken modulo the partition count to obtain the partition ID.
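
The following is a simplified, illustrative sketch of the serialize, hash, and modulo steps; Java serialization and Arrays.hashCode here are stand-ins for Hazelcast's own serialization and hash function:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;
    import java.util.Arrays;

    public class PartitionIdSketch {
        static final int PARTITION_COUNT = 271; // Hazelcast's default

        // Serialize the key into a byte array (stand-in for Hazelcast's
        // own serialization).
        static byte[] serialize(Serializable key) throws IOException {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(key);
            }
            return bytes.toByteArray();
        }

        // Hash the byte array and take the result modulo the partition count
        // to get the partition ID.
        static int partitionId(Serializable key) throws IOException {
            int hash = Arrays.hashCode(serialize(key)); // stand-in hash function
            return Math.floorMod(hash, PARTITION_COUNT);
        }

        public static void main(String[] args) throws IOException {
            System.out.println("\"France\" maps to partition " + partitionId("France"));
        }
    }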

Partition table

A partition table is generated when the first node starts. The partition table records the mapping between partitions and nodes, so that every node in the cluster knows where each partition and its data live. The first node to start, the master, periodically sends this partition table to the other nodes in the cluster, so each node has the latest partition information when nodes are added to or removed from the cluster. If the master fails, the cluster elects a new master (the second node that was started), and the new master takes over sending the partition table. The send interval can be configured with the hazelcast.partition.table.send.interval system property, which defaults to 15 seconds.
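
A minimal sketch of changing this interval programmatically, using the property name and default described above; the value 30 (seconds) is an arbitrary example:

    import com.hazelcast.config.Config;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;

    public class PartitionTableIntervalExample {
        public static void main(String[] args) {
            Config config = new Config();
            // Send the partition table every 30 seconds instead of the default 15.
            config.setProperty("hazelcast.partition.table.send.interval", "30");

            HazelcastInstance member = Hazelcast.newHazelcastInstance(config);
        }
    }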
