Hazelcast is an open-source embeddable in-memory data grid (the Community Edition is free; the Enterprise Edition is paid). You can think of it as an in-memory database, although it differs somewhat from in-memory databases such as Redis. Project address: http://hazelcast.org/
Hazelcast makes it easier for Java programmers to build distributed systems by providing distributed implementations of familiar Java interfaces such as Map, Queue, ExecutorService, Lock, and JCache. It ships as a single JAR, depends only on Java, and offers Java, C++, .NET, and REST clients, making it easy to adopt.
A quick embedded example:

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

import java.util.concurrent.ConcurrentMap;

public class DistributedMap {
    public static void main(String[] args) {
        Config config = new Config();
        HazelcastInstance h = Hazelcast.newHazelcastInstance(config);
        ConcurrentMap<String, String> map = h.getMap("my-distributed-map");
        map.put("key", "value");
        map.get("key");

        // ConcurrentMap methods
        map.putIfAbsent("somekey", "somevalue");
        map.replace("key", "value", "newValue");
    }
}
```
How to store data
The Hazelcast architecture is peer-to-peer, with no master/slave distinction, so there is no single point of failure. All nodes in the cluster store an equal amount of data and perform an equal share of the computation.
Hazelcast divides the data into 271 partitions by default. This value can be configured with the hazelcast.partition.count system property. For a given key, the partition number is obtained by serializing the key, hashing the serialized bytes, and taking the hash modulo the total number of partitions. All partitions are distributed equally among the nodes in the cluster, and the backup of each partition is likewise distributed across the cluster.
The following example is a hazelcast cluster with 2 nodes:
The black text represents the partitions and the blue text represents the backups. Node 1 stores partitions 1 to 135, which are backed up on Node 2; Node 2 stores partitions 136 to 271, which are backed up on Node 1.
If you then add 2 more nodes to the cluster, Hazelcast moves partitions and their backups to the new nodes one at a time, so that data remains evenly distributed across the cluster.
Note that in reality partitions are not distributed sequentially but randomly; the example above is ordered only for ease of understanding. The important point is that Hazelcast distributes partitions and their backups evenly across the nodes.
Hazelcast uses a hashing algorithm for data partitioning. For a given key (for example, in a map) or object name (for example, of a topic or list):
- Serialize the key or object name to obtain a byte array.
- Hash the byte array.
- Take the hash modulo the total number of partitions; the result is the partition number.
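The three steps above can be sketched in plain Java. This is illustrative only: the class name is made up, and the hash function here is `Arrays.hashCode` for simplicity, whereas Hazelcast itself uses MurmurHash3 internally.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class PartitionSketch {
    static final int PARTITION_COUNT = 271; // Hazelcast's default

    // Illustrative only: Hazelcast actually uses MurmurHash3, not Arrays.hashCode.
    static int partitionId(String keyOrName) {
        byte[] data = keyOrName.getBytes(StandardCharsets.UTF_8); // 1. serialize
        int hash = Arrays.hashCode(data);                         // 2. hash
        return Math.floorMod(hash, PARTITION_COUNT);              // 3. modulo
    }

    public static void main(String[] args) {
        // The same key always maps to the same partition on every node.
        System.out.println(partitionId("my-distributed-map"));
    }
}
```

`Math.floorMod` is used instead of `%` so the result is never negative, even for negative hash values.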
Each node maintains a partition table that maps partition numbers to nodes, so every node knows where to find any piece of data.
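Conceptually, the partition table is just a mapping from partition number to owning member. A toy sketch matching the two-node example above (the class name and node addresses are hypothetical, and the real table also tracks backup replicas):

```java
import java.util.HashMap;
import java.util.Map;

public class PartitionTableSketch {
    // Conceptual partition table: partition number -> owning node address.
    static Map<Integer, String> build() {
        Map<Integer, String> partitionTable = new HashMap<>();
        for (int id = 1; id <= 271; id++) {
            // Mirrors the two-node example: 1-135 on node 1, 136-271 on node 2.
            partitionTable.put(id, id <= 135 ? "node1:5701" : "node2:5701");
        }
        return partitionTable;
    }

    public static void main(String[] args) {
        // Any node can look up which member owns a given partition.
        System.out.println(build().get(42)); // prints "node1:5701"
    }
}
```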
Re-partitioning
The oldest node in the cluster (the first one to start) is responsible for periodically sending the partition table to the other nodes, so that every node has an up-to-date partition table when a node joins or leaves the cluster.
Note: if the oldest node goes down, the next-oldest node takes over this task.
The interval of this periodic task can be configured with the hazelcast.partition.table.send.interval system property; the default value is 15 seconds.
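Both system properties mentioned in this article can also be set programmatically on the Config object before starting a member. A configuration sketch (the values shown are the defaults):

```java
import com.hazelcast.config.Config;

public class PropertyConfig {
    public static void main(String[] args) {
        Config config = new Config();
        // Values shown are the defaults.
        config.setProperty("hazelcast.partition.count", "271");
        config.setProperty("hazelcast.partition.table.send.interval", "15");
        // Passing this config to Hazelcast.newHazelcastInstance(config)
        // would start a member with these settings.
    }
}
```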
Re-partitioning takes place when:
- A node joins the cluster.
- A node leaves the cluster.
When this happens, the oldest node updates the partition table and distributes it, and the cluster then starts moving partitions or restoring partitions from their backups.
How to use
There are two deployment modes: embedded and client-server.
- Embedded: the Hazelcast server JAR is added to the host application, and a Hazelcast member starts and lives inside each host application's JVM. The advantage is lower-latency data access.
- Client-server: the Hazelcast client JAR is added to the host application, while the server JARs run in their own independent JVMs. The advantages are easier debugging, more predictable performance, and, most importantly, better scalability.
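A minimal client-server connection sketch, assuming a cluster member is already running at 127.0.0.1:5701 (the class name and address are assumptions; adjust them for your environment). This is essentially connection configuration and will only run against a live cluster:

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;

import java.util.Map;

public class ClientModeExample {
    // Builds the client configuration; the address is an assumption --
    // point it at a member of your own cluster.
    static ClientConfig buildConfig() {
        ClientConfig clientConfig = new ClientConfig();
        clientConfig.getNetworkConfig().addAddress("127.0.0.1:5701");
        return clientConfig;
    }

    public static void main(String[] args) {
        // Requires a running Hazelcast member at the configured address.
        HazelcastInstance client = HazelcastClient.newHazelcastClient(buildConfig());
        Map<String, String> map = client.getMap("my-distributed-map");
        map.put("key", "value");
        client.shutdown();
    }
}
```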