HBase Online Data Backup


Brief Introduction

An important improvement in HBase 0.90.0 is the introduction of a replication mechanism, which further safeguards data integrity.
HBase replication works much like MySQL statement-based replication. It is implemented through WALEdit and HLog: when a write request reaches the master cluster, the HLog entry is written to HDFS and placed into a replication queue; the slave cluster discovers the queue position through ZooKeeper and applies the edits to its own table. The current version supports only one slave cluster.

HBase Replication

HBase replication is a way to copy data between different HBase deployments. It can be used as a disaster-recovery mechanism and provides high availability at the HBase level.

The most basic architecture of HBase replication is "master push". Because each region server has its own WAL (or HLog), it is easy to record the position up to which data has already been replicated. As in the well-known MySQL master/slave replication, only the log files are used to track modifications. One master cluster can replicate data to any number of slave clusters, and each region server participates in replicating its own modifications.

The HLog from each region server is the basis of HBase replication; the logs must be kept in HDFS for as long as they are still needed to replicate data to a slave cluster. Each region server starts replicating from the oldest log it still needs and records its current position in ZooKeeper to simplify failure recovery. The position recorded for each slave cluster may differ, but the HLog queue content they consume is the same.
The clusters participating in replication can be of unequal size. The master cluster distributes the load over the slave cluster as evenly as possible by random allocation.

Problems addressed by online backup:

Data management mistakes and irreversible DDL operations.
Block corruption in the underlying HDFS files.
A burst of heavy reads over a short period puts pressure on the cluster; adding servers just to handle such spikes wastes resources.
System upgrades, maintenance, and problem diagnosis increase cluster downtime.
The atomicity of double writes (writing to two clusters from the client) is difficult to guarantee.
Unpredictable failures (such as a data center power outage, large-scale hardware damage, or a broken network).
Offline MapReduce jobs can add large delays to online reads and writes.

Online Backup Scheme Comparison

The following data-center redundancy backup schemes are analyzed from the angles of consistency, transactionality, latency, throughput, data loss, and failover.

Simple Backup Mode

This scheme can be implemented by taking snapshots or by dumping data up to a given timestamp, which secures the data as of that point in time.
The scheme is simple, the design is elegant, and it can back up an online data center with little or no interference.
The disadvantage is equally obvious: only data written before the backup point is protected, so an unexpected failure inevitably loses everything written since the last backup, which is unacceptable for many applications.
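As a rough sketch of the snapshot variant, on HBase versions that support snapshots (0.94.6/0.96 and later), one can take a snapshot in the HBase shell and then export it to the backup cluster with the bundled ExportSnapshot tool. The table, snapshot, and backup-cluster names below are placeholder assumptions:

hbase(main):001:0> snapshot 'your_table', 'your_table_backup_20150101'
$ hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot your_table_backup_20150101 -copy-to hdfs://backup-cluster:8020/hbase -mappers 16

The export runs as a MapReduce job and copies the snapshot files to the backup cluster's HDFS with little disturbance to the online region servers.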

Master-slave mode (master-slave)

This mode has many more advantages than the simple backup mode: eventual consistency of the data is guaranteed, the replication delay from the primary cluster to the standby cluster is low, the asynchronous writes put essentially no performance pressure on the primary cluster, the amount of data temporarily lost in an unexpected event is very small, and transactions on the primary cluster are preserved on the standby cluster.
It is usually implemented by building a solid log system with checkpoints. It allows read/write separation: the primary cluster serves both reads and writes, while the standby cluster generally serves only reads.

Master-master mode (master-master)

The principle is similar to master-slave mode; the difference is that the two clusters replicate to each other, so both can serve reads and writes. The drawback is reduced throughput.
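A hedged sketch of how the mutual replication could be wired up, assuming replication is already enabled on both clusters; the ZooKeeper quorum addresses below are placeholders. Each cluster simply registers the other as a peer:

# In cluster A's HBase shell, register cluster B as a peer:
hbase(main):001:0> add_peer '1', 'zk-b.example.com:2181:/hbase'
# In cluster B's HBase shell, register cluster A as a peer:
hbase(main):001:0> add_peer '1', 'zk-a.example.com:2181:/hbase'

REPLICATION_SCOPE => '1' must also be set on the replicated column families of both clusters, as shown in the deployment steps below.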

2 Phase Commit

This scheme guarantees strong consistency and transactions: once the server returns success to the client, the data is guaranteed to have been backed up, so no data is lost. Each server can handle both read and write services.
However, the disadvantage is that latency is high and overall throughput drops.

Paxos Algorithm

This is a strong-consistency scheme implemented with the Paxos algorithm: whichever server a client connects to, data consistency is guaranteed.
The disadvantage is implementation complexity, and cluster latency and throughput get worse as the number of servers grows.

Master-Slave Mode Replication Workflow

Deployment Steps

1. First, set up two HBase clusters.
2. Edit ${HBASE_HOME}/conf/hbase-site.xml on every machine in the master cluster and add the following configuration:

<property>
  <name>hbase.replication</name>
  <value>true</value>
</property>

After the modification is complete, restart the HBase master cluster for the configuration to take effect.

3. Run the following command in the HBase Shell:

hbase(main):001:0> add_peer 'ID', 'CLUSTER_KEY'
hbase(main):002:0> start_replication

The first command registers the slave cluster's ZooKeeper information, which allows modifications to be synchronized to the slave cluster.
The second command actually starts publishing the edits to the slave cluster. For this to work as expected, you must make sure that a copy of the table already exists on the slave cluster; the table can be empty, but it must have the same schema and table name.
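For example, a minimal sketch of preparing the slave cluster (the table and column-family names are placeholders) is simply to create the identical table there first:

hbase(main):001:0> create 'your_table', 'family_name'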

Note:
hbase-0.96 and hbase-0.98 no longer have the start_replication and stop_replication commands. Compared with hbase-0.96, hbase-0.98 adds the set_peer_tableCFs and show_peer_tableCFs commands; when setting up replication on hbase-0.98, use set_peer_tableCFs to choose what to replicate (a sketch follows these notes). See the shell help for details.
ID must be a short integer. The content of CLUSTER_KEY follows this template:
hbase.zookeeper.quorum:hbase.zookeeper.property.clientPort:zookeeper.znode.parent
For example: zk.server.com:2180:/hbase
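As a sketch for hbase-0.98 (the peer id, table, and column-family names below are placeholders), restricting replication to particular tables and column families for a peer might look like this:

hbase(main):003:0> set_peer_tableCFs '1', "table1; table2:cf1,cf2"
hbase(main):004:0> show_peer_tableCFs '1'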

Note: If the two clusters use the same ZooKeeper cluster, they must use different zookeeper.znode.parent values, because they cannot write to the same folder.
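For instance, a sketch of giving the second cluster its own root znode in its hbase-site.xml; the /hbase-backup value is only an assumed example:

<property>
  <name>zookeeper.znode.parent</name>
  <value>/hbase-backup</value>
</property>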

4. Once you have a peer (slave) cluster, you need to enable replication on your column families. To do this, execute the following commands in the HBase shell:

hbase(main):005:0> disable 'your_table'
hbase(main):006:0> alter 'your_table', {NAME => 'family_name', REPLICATION_SCOPE => '1'}
hbase(main):007:0> enable 'your_table'

A REPLICATION_SCOPE value of 0 (the default) means the column family will not be replicated; a value of 1 means it will be replicated.

5. Run the following command to list all configured peer (slave) clusters:

hbase(main):008:0>list_peers

6. Running the following command disables a peer (slave) cluster:

hbase(main):009:0> disable_peer 'ID'

After running this command, HBase stops sending modifications to that peer (slave) cluster, but it keeps track of all new WAL files so that replication can resume once the peer is enabled again.

7. You can run the following command to re-enable a previously disabled peer (slave) cluster:

hbase(main):010:0>enable_peer ‘ID‘

8. Run the following commands to remove a slave cluster from replication:

hbase(main):011:0> stop_replication
hbase(main):012:0> remove_peer 'ID'

It is important to note that stopping replication still flushes the modifications already queued, but no further processing happens afterwards. To confirm that your configuration is working, check the log files of any region server on the master cluster for lines similar to the following:

Considering 1 rs, with ratio 0.1
Getting 1 rs from peer cluster # 0
Choosing peer 10.10.1.49:62020
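Beyond reading the logs, one way to compare the actual data on both clusters, on versions that ship the VerifyReplication MapReduce job, is roughly the following; the peer id and table name are placeholders:

$ hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication ID your_table

The job reports matching and mismatching row counts through its MapReduce counters.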

Copyright notice: This is the blogger's original article; it may not be reproduced without the blogger's permission.
