HBase System Architecture

HBase is a database in the Apache Hadoop ecosystem that provides random, real-time read and write access to very large datasets. Its goal is to store and process big data. HBase is an open-source, distributed, multi-version, column-oriented store for loosely structured data.

HBase Features:

1 High reliability

2 High performance

3 Column-oriented storage

4 Scalability

5 Large-scale structured storage clusters can be built on inexpensive commodity servers

HBase is an open-source implementation of Google's Bigtable, with the following correspondence between the two stacks:

Role                       Google       Hadoop/HBase
File storage system        GFS          HDFS
Massive data processing    MapReduce    Hadoop MapReduce
Coordination service       Chubby       ZooKeeper

HBase in the Hadoop ecosystem (diagram):

HBase sits at the structured storage layer, and the surrounding parts of the Hadoop ecosystem each support HBase in a different way:

Hadoop component   Role
HDFS               Highly reliable underlying storage
MapReduce          High-performance computation
ZooKeeper          Stable coordination service and failover mechanism
Pig & Hive         High-level language support, convenient for data statistics
Sqoop              RDBMS data import, for migrating data from traditional databases into HBase

Interfaces for accessing HBase

Interface          Characteristics                                       Typical use
Native Java API    The most conventional and efficient access method     Hadoop MapReduce jobs that process HBase table data in parallel
HBase Shell        The simplest interface                                HBase administration
Thrift Gateway     Thrift serialization, supports multiple languages     Online access to HBase table data from heterogeneous systems
REST Gateway       Removes language restrictions                         REST-style HTTP API access
Pig                Pig Latin data-flow language for processing data      Data statistics
Hive               Simple, SQL-like (HiveQL)                             Data statistics
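
To make the Native Java API row concrete, here is a minimal sketch of a single put and get through the standard HBase client (HBase 1.x/2.x API); the table name "user", column family "info", and all values are hypothetical, and the table is assumed to already exist.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NativeApiExample {
        public static void main(String[] args) throws Exception {
            // Reads hbase-site.xml from the classpath (ZooKeeper quorum, etc.)
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table table = conn.getTable(TableName.valueOf("user"))) {   // hypothetical table
                // Write one cell: row key "row1", column family "info", qualifier "name"
                Put put = new Put(Bytes.toBytes("row1"));
                put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("alice"));
                table.put(put);

                // Read the cell back
                Result result = table.get(new Get(Bytes.toBytes("row1")));
                System.out.println(Bytes.toString(
                        result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"))));
            }
        }
    }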

HBase Data Model

Component Description:

Row Key: the table's primary key; the records in a table are sorted by row key
Timestamp: the timestamp attached to each data operation, which serves as the data's version number
Column Family: a table has one or more column families in the horizontal direction; a column family can consist of any number of columns and supports dynamic extension, with no need to predefine the number or type of columns; values are stored as raw bytes, so type conversion is up to the user (a minimal create-table sketch follows below)
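
As a small illustration of the model, the sketch below declares a table by naming only its column families, using the HBase 2.x Admin API; the table name "user" and the family names "info" and "metrics" are hypothetical.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateTableExample {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                // Only the table name and its column families are declared up front;
                // individual columns are created implicitly when data is written.
                admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("user"))
                        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))
                        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("metrics"))
                        .build());
            }
        }
    }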

Table & Region

1. As records grow, a table is automatically split into multiple regions.

2. A region is identified by its key range [startKey, endKey).

3. The HMaster assigns the different regions to the appropriate HRegionServers for management.

Two special tables: -ROOT- and .META.

.META. records the region information of the user tables; .META. itself can consist of multiple regions.

-ROOT- records the region information of the .META. table; -ROOT- always has exactly one region.

The location of the -ROOT- table is recorded in ZooKeeper.

The process by which a client locates data:

Client -> ZooKeeper -> -ROOT- -> .META. -> user data table

This takes several network round trips, but the client caches the locations it resolves.
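
The lookup is handled transparently by the client library, but it can also be observed explicitly. The sketch below, assuming the HBase 2.x client API and a hypothetical table "user", asks which region and RegionServer hold a given row key; the connection caches the answer for subsequent requests.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RegionLookupExample {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 RegionLocator locator = conn.getRegionLocator(TableName.valueOf("user"))) {
                // Resolve which region (and therefore which RegionServer) holds this row key;
                // the client walks the catalog tables once and then serves later lookups from cache.
                HRegionLocation location = locator.getRegionLocation(Bytes.toBytes("row1"));
                System.out.println("region: " + location.getRegion().getRegionNameAsString());
                System.out.println("server: " + location.getServerName());
            }
        }
    }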

HBase System Architecture Diagram

Description of the constituent parts:

Client:

The client communicates with the HMaster and the HRegionServers through HBase's RPC mechanism: it talks to the HMaster for management operations and to the HRegionServers for data reads and writes.

ZooKeeper:

The ZooKeeper quorum stores the address of the -ROOT- table and the address of the HMaster. Each HRegionServer registers itself in ZooKeeper as an ephemeral node, so the HMaster can perceive the health of every HRegionServer at any time. ZooKeeper also prevents the HMaster from being a single point of failure.
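
In practice this means a client never needs the HMaster's address; pointing it at the ZooKeeper ensemble is enough, because the catalog location and the active master are discovered from there. A minimal sketch, with hypothetical ZooKeeper host names:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ZkClientConfigExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            // The client only needs to know the ZooKeeper quorum; everything else is discovered.
            conf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com,zk3.example.com"); // hypothetical hosts
            conf.set("hbase.zookeeper.property.clientPort", "2181");
            try (Connection conn = ConnectionFactory.createConnection(conf)) {
                System.out.println("connected: " + !conn.isClosed());
            }
        }
    }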

HMaster:

There is no single point of failure at the HMaster: HBase can start multiple HMaster processes, and ZooKeeper's master-election mechanism ensures that one of them is always running. The HMaster is mainly responsible for managing tables and regions:

1 Handling users' requests to create, alter, and delete tables (see the sketch after this list)

2 Managing HRegionServer load balancing and adjusting the distribution of regions

3 Assigning the new regions after a region split

4 Migrating the regions of a failed HRegionServer to other HRegionServers after an outage
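
Table-level DDL of this kind goes through the HMaster rather than the RegionServers. A minimal sketch of dropping a table with the 2.x Admin API (the table name "user" is hypothetical):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropTableExample {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                TableName name = TableName.valueOf("user");   // hypothetical table
                if (admin.tableExists(name)) {
                    admin.disableTable(name);  // a table must be disabled before it can be dropped
                    admin.deleteTable(name);
                }
            }
        }
    }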

HRegionServer:

The most central module in HBase. It is primarily responsible for responding to user I/O requests and reading and writing data against the HDFS file system.

An HRegionServer manages a series of HRegion objects. Each HRegion corresponds to one region of a table, and an HRegion consists of multiple HStores.

Each HStore corresponds to the storage of one column family in the table. A column family is effectively a centralized storage unit, so it is most efficient to place columns with the same I/O characteristics in the same column family.

HStore:
The core of HBase storage, made up of a MemStore and StoreFiles. The MemStore is a sorted memory buffer. The process by which user data is written:


The client writes into the MemStore until the MemStore is full, at which point it is flushed to a new StoreFile; once the number of StoreFiles reaches a certain threshold, a compaction is started.

A compaction merges multiple StoreFiles into a single StoreFile, performing version merging and data deletion along the way. As compactions run, the StoreFiles gradually grow larger and larger; when a single StoreFile exceeds a certain size threshold, a split is triggered: the current region is split into two regions, the parent region is taken offline, and the HMaster assigns the two newly split child regions to the appropriate HRegionServers, so that the load of the original single region is spread across two regions.

As a result of this design, HBase only ever appends data; updates and deletes are resolved during the compaction phase. A user write therefore only needs to reach memory before it returns, which keeps I/O performance high. A few of the configuration thresholds involved are sketched below.
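
The thresholds mentioned above are ordinary HBase configuration properties, normally set in hbase-site.xml on the servers. The sketch below only names the relevant knobs in code; the values are illustrative, not recommendations, and the defaults differ between HBase versions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class StoreTuningExample {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // MemStore size at which a flush to a new StoreFile is triggered
            conf.setLong("hbase.hregion.memstore.flush.size", 128L * 1024 * 1024);
            // Number of StoreFiles in one HStore that triggers a compaction
            conf.setInt("hbase.hstore.compactionThreshold", 3);
            // StoreFile size at which the region is split in two
            conf.setLong("hbase.hregion.max.filesize", 10L * 1024 * 1024 * 1024);
            System.out.println(conf.get("hbase.hregion.memstore.flush.size"));
        }
    }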

HLog

Why HLog is introduced:

In a distributed environment, system errors and outages cannot be avoided. Once an HRegionServer exits, the in-memory data in its MemStores is lost; HLog is introduced to prevent exactly this.

Working mechanism:

Each HRegionServer holds an HLog object, a class that implements the write-ahead log. Every time a user operation is written to the MemStore, a copy of the data is also appended to the HLog file. The HLog file is rolled periodically: a new file is started, and old files whose data has already been persisted to StoreFiles are deleted. When an HRegionServer terminates unexpectedly, the HMaster learns of it through ZooKeeper. The HMaster first processes the orphaned HLog files, splitting the log entries of the different regions and placing them into the corresponding region directories, and then reassigns the failed regions. When an HRegionServer loads one of these reassigned regions, it finds that there are historical HLog entries to process, replays the HLog data into the MemStore, and then flushes to StoreFiles, completing the data recovery.
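
On the write path this means that every edit normally passes through the HLog before it is acknowledged. A small sketch of the per-operation durability switch in the standard client API (row key, family, and qualifier are hypothetical):

    import org.apache.hadoop.hbase.client.Durability;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class WalDurabilityExample {
        public static void main(String[] args) {
            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("alice"));

            // Default behaviour: the edit is written to the HLog (WAL) as well as the MemStore,
            // so it can be replayed if the HRegionServer dies.
            put.setDurability(Durability.SYNC_WAL);

            // Opting out trades recoverability for speed: a crash would lose whatever
            // is still only in the MemStore.
            // put.setDurability(Durability.SKIP_WAL);
        }
    }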

HBase Storage Formats

All data files in HBase are stored on the Hadoop HDFS file system, in two main formats:

1 HFile: HBase's storage format for KeyValue data. An HFile is a binary-format Hadoop file; a StoreFile is in fact a lightweight wrapper around an HFile, i.e. the underlying format of a StoreFile is HFile.

2 HLog file: the storage format of HBase's WAL (write-ahead log), physically a Hadoop SequenceFile.

HFile

Explanation of the figure:

An HFile has variable length; only two of its blocks have a fixed size: Trailer and FileInfo. Pointers in the Trailer point to the starting offsets of the other blocks, and FileInfo records meta-information about the file, for example AVG_KEY_LEN, AVG_VALUE_LEN, LAST_KEY, COMPARATOR, MAX_SEQ_ID_KEY, and so on.

The Data Index and Meta Index blocks record the starting offset of each Data block and Meta block. The Data block is the basic unit of HBase I/O; to improve efficiency, the HRegionServer keeps an LRU-based block cache. The size of each Data block can be specified by a parameter when the table is created: larger blocks favor sequential scans, while smaller blocks favor random lookups. Apart from the Magic value at its start, a Data block is simply a sequence of concatenated KeyValues; the Magic content is a random number whose purpose is to detect data corruption.
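
The per-family block size and block cache behaviour mentioned above are specified when the table (or column family) is defined. A sketch using the HBase 2.x ColumnFamilyDescriptorBuilder, with a hypothetical family name:

    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BlockSizeExample {
        public static void main(String[] args) {
            // Larger blocks favor sequential scans; smaller blocks favor random reads.
            ColumnFamilyDescriptor cf = ColumnFamilyDescriptorBuilder
                    .newBuilder(Bytes.toBytes("info"))      // hypothetical column family
                    .setBlocksize(8 * 1024)                 // 8 KB data blocks instead of the 64 KB default
                    .setBlockCacheEnabled(true)             // keep hot blocks in the LRU block cache
                    .build();
            System.out.println(cf);
        }
    }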

Each KeyValue inside an HFile is a simple byte array, but this array has a fixed internal structure made up of several fields.


Key Length and Value Length: two fixed-length integers giving the length of the key part and of the value part, respectively.

Key part: Row Length is a fixed-length value giving the length of the row key, followed by Row, the row key itself. Column Family Length is a fixed-length value giving the length of the column family name.

Next come the Column Family and the Qualifier, followed by two fixed-length values for the Timestamp and the Key Type (Put/Delete). The Value part has no such structure; it is purely binary data. (An illustrative walk through this layout follows below.)
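
To make the layout concrete, the sketch below walks a serialized KeyValue by hand, purely as an illustration of the fields described above; real code should use the org.apache.hadoop.hbase.KeyValue / Cell classes rather than parsing bytes directly.

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;

    public class KeyValueLayoutSketch {
        public static void parse(byte[] bytes) {
            ByteBuffer buf = ByteBuffer.wrap(bytes);
            int keyLength = buf.getInt();      // fixed-length key length
            int valueLength = buf.getInt();    // fixed-length value length

            short rowLength = buf.getShort();  // key part: row length ...
            byte[] row = new byte[rowLength];
            buf.get(row);                      // ... then the row key itself

            byte familyLength = buf.get();     // column family length
            byte[] family = new byte[familyLength];
            buf.get(family);

            // The qualifier occupies whatever remains of the key part
            int qualifierLength = keyLength - 2 - rowLength - 1 - familyLength - 8 - 1;
            byte[] qualifier = new byte[qualifierLength];
            buf.get(qualifier);

            long timestamp = buf.getLong();    // fixed-length timestamp
            byte keyType = buf.get();          // key type (Put / Delete ...)

            byte[] value = new byte[valueLength];
            buf.get(value);                    // the value is opaque binary data

            System.out.printf("%s %s:%s @%d type=%d -> %s%n",
                    new String(row, StandardCharsets.UTF_8),
                    new String(family, StandardCharsets.UTF_8),
                    new String(qualifier, StandardCharsets.UTF_8),
                    timestamp, keyType, new String(value, StandardCharsets.UTF_8));
        }
    }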

HLog File


An HLog file is an ordinary Hadoop SequenceFile. The key of each SequenceFile record is an HLogKey object, which records the provenance of the written data: besides the table name and region name, it contains a sequence number and a timestamp. The timestamp is the write time; the sequence number starts at 0, or at the last sequence number that was persisted to the file system.

The value of each HLog SequenceFile record is HBase's KeyValue object, i.e. the same KeyValue structure found in HFiles.

Original: http://blog.chedushi.com/archives/9723
