I. What is HBase
HBase is a highly reliable, high-performance, column-oriented, scalable, distributed storage system. It can be used to build large, structured storage clusters on inexpensive commodity (PC) servers.
HBase is an open-source implementation of Google Bigtable, and the analogy runs through the whole stack: where Google Bigtable uses GFS as its file storage system, HBase uses Hadoop HDFS; where Google runs MapReduce to process the massive data in Bigtable, HBase uses Hadoop MapReduce; and where Google Bigtable uses Chubby as its coordination service, HBase uses ZooKeeper as its counterpart.
II. HBase design model
Each table in HBase is a so-called BigTable. A BigTable stores a series of row records, and each record is defined by three basic elements: row key, time stamp, and column.
1. Row Key is the unique identifier of a row in a BigTable.
2. Time Stamp is the timestamp associated with each data operation; it can be thought of as a version number, much like a revision in SVN.
3. Column is defined as <family>:<label>; together these two parts uniquely identify a stored column. Defining or modifying a family requires a DDL-like operation in HBase, whereas a label needs no prior definition and can be used directly, which provides a means of creating dynamic custom columns. The family also plays a role in physical storage optimization: data in the same family is stored physically closer together, so read and write patterns can take advantage of this during schema design.
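As a quick, hedged illustration of these three elements in the HBase shell (the table name 'webtable', the family 'info', and all values are hypothetical, chosen only for this sketch):

create 'webtable', 'info'                      # the family must be declared up front (the DDL-like step)
put 'webtable', 'row1', 'info:title', 'Home'   # the label 'title' is used directly, no definition needed
put 'webtable', 'row1', 'info:lang', 'en'      # a new label = a dynamic custom column
get 'webtable', 'row1'                         # each cell comes back with its timestamp (version)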
1. Logical Storage Model
HBase stores data in the form of tables. A table is made up of rows and columns, and the columns are grouped into a number of column families.
The elements of the table are examined in detail below:
Row Key
As in other NoSQL databases, the row key is the primary key used to retrieve records. There are only three ways to access rows in an HBase table:
1. Access via a single row key
2. Access via a range of row keys
3. Full table scan
A row key can be any string (the maximum length is 64KB; in practice 10-100 bytes is typical). Inside HBase, the row key is stored as a byte array.
When stored, data is sorted by the lexicographic (byte) order of the row key. When designing keys, take full advantage of this sorted-storage property: place rows that are often read together next to each other (positional locality).
Note:
1. Lexicographic order sorts integers as 1,10,100,11,12,13,14,15,16,17,18,19,2,20,21,...,9,91,92,93,94,95,96,97,98,99. To preserve the natural ordering of integers, row keys must be left-padded with zeros (see the sketch after this note).
2. A read or write of a single row is an atomic operation (no matter how many columns are read or written).
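A hedged HBase shell sketch of the padding rule (tables 't1'/'t2' and values are hypothetical):

create 't1', 'cf'
put 't1', '1', 'cf:v', 'a'
put 't1', '2', 'cf:v', 'b'
put 't1', '10', 'cf:v', 'c'
scan 't1'    # rows come back as 1, 10, 2 - byte order, not integer order

create 't2', 'cf'
put 't2', '001', 'cf:v', 'a'
put 't2', '002', 'cf:v', 'b'
put 't2', '010', 'cf:v', 'c'
scan 't2'    # rows come back as 001, 002, 010 - the natural integer order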
Column Family
Every column in an HBase table belongs to some column family. A column family is part of the table's schema (whereas a column is not) and must be defined before the table can be used. Column names are prefixed with their column family name; for example, courses:history and courses:math both belong to the courses family.
Access control, as well as disk and memory usage accounting, happens at the column-family level. In practice, column-family-level permissions help us manage different kinds of applications: some applications may add new base data, some may read base data and create derived column families, and some may only browse data (and perhaps not even all of it, for privacy reasons).
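As an illustration of family-level control, assuming HBase security (the AccessController coprocessor) is enabled, permissions can be granted per column family from the HBase shell (the user and table names here are hypothetical):

grant 'app_writer', 'RW', 'student', 'courses'   # read/write on the courses family only
grant 'app_reader', 'R', 'student', 'courses'    # read-only on the courses family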
Time Stamp
A storage unit identified in HBase by row and column is called a cell. Each cell holds multiple versions of the same piece of data, and the versions are indexed by timestamp. The timestamp's type is a 64-bit integer. A timestamp can be assigned by HBase automatically when data is written, in which case it is the current system time, accurate to the millisecond; it can also be assigned explicitly by the client. If an application wants to avoid data version conflicts, it must generate unique timestamps itself. Within each cell, the different versions of the data are sorted in reverse chronological order, i.e. the newest data comes first.
To avoid the management burden (both storage and indexing) of keeping too many data versions, HBase provides two ways of reclaiming old versions: keep only the last n versions of the data, or keep only the versions from some recent period (for example, the last seven days). Users can configure both per column family, as shown in the sketch below.
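A hedged HBase shell sketch of both retention settings and of client-supplied timestamps (the table and values are hypothetical):

create 'student', 'courses'
alter 'student', NAME => 'courses', VERSIONS => 3   # keep at most the last 3 versions
alter 'student', NAME => 'courses', TTL => 604800   # keep versions for 7 days (TTL is in seconds)
put 'student', 'tom', 'courses:history', '80', 1000 # explicit timestamp 1000
put 'student', 'tom', 'courses:history', '90', 2000 # a newer version of the same cell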
Cell
The unit uniquely determined by {row key, column (= <family> + <label>), version}. The data in a cell has no type and is stored entirely as raw bytes.
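Continuing the hypothetical example above, a specific cell version can be addressed from the shell:

get 'student', 'tom', {COLUMN => 'courses:history', VERSIONS => 3}      # versions, newest first
get 'student', 'tom', {COLUMN => 'courses:history', TIMESTAMP => 1000}  # one exact version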
2. Physical Storage Model
A table is split into multiple HRegions along the row direction, and the HRegions are scattered across different RegionServers.
Each HRegion is made up of multiple Stores; each Store consists of one MemStore and zero or more StoreFiles, and each Store holds exactly one column family.
StoreFiles are stored in HDFS in the HFile format.
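This nesting is visible directly in HDFS. A hedged sketch (the exact directory layout varies by HBase version; the path below follows the 0.96+ convention of /hbase/data/<namespace>/<table>, and the table name is hypothetical):

hadoop fs -ls -R /hbase/data/default/student
# .../student/<encoded-region-name>/courses/<storefile>
# one directory per HRegion, one subdirectory per column family (= one Store),
# and each file inside is a StoreFile in HFile format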
III. HBase storage architecture
As can be seen from the HBase storage architecture diagram (not reproduced here), storage in HBase involves HMaster, HRegionServer, HRegion, Store, MemStore, StoreFile, HFile, and HLog, among others.
Each table in HBase is divided into multiple sub-tables (HRegions) by ranges of row keys. By default, an HRegion that grows beyond 256MB is split in two. HRegions are served by HRegionServers, and region assignment to HRegionServers is managed by the HMaster.
The role of HMaster:
1. Assign regions to the region servers.
2. Be responsible for load balancing across region servers.
3. Detect failed region servers and reassign their regions.
4. Reclaim garbage files on HDFS.
5. Process schema update requests.
The role of HRegionServer:
1. Maintain the regions the master has assigned to it, and handle IO requests to those regions.
2. Be responsible for splitting regions that have grown too large.
As you can see, a client accessing data in HBase does not need the master's involvement (the client goes to ZooKeeper and the region servers for addressing, and to the region servers for data reads and writes); the master maintains only the metadata of tables and regions (table metadata is stored on ZooKeeper), so its load is low. When an HRegionServer opens a sub-table, it creates an HRegion object, which in turn creates a Store instance for each column family of the table. Each Store has one MemStore and zero or more StoreFiles associated with it, and each StoreFile corresponds to one HFile, the actual storage file. An HRegion therefore has as many Stores as the table has column families, and a single HRegionServer hosts multiple HRegions and one HLog.
HRegion
A table is split into multiple regions along the row direction. The region is the smallest unit of distributed storage and load balancing in HBase: different regions can live on different region servers, but a single region is never split across multiple servers.
Regions are split by size. Each table initially has only one region; as data is inserted, the region grows, and when one of its column families reaches a threshold (256MB by default), the region is split into two new regions.
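The split threshold corresponds to a configurable property. A minimal hbase-site.xml sketch (268435456 bytes = 256MB, the default cited above):

<property>
  <name>hbase.hregion.max.filesize</name>
  <value>268435456</value>
</property>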
1, < table name, Startrowkey, create Time >
2. By Table of contents (-root-and. META.) Record the endrowkey of the region
HRegion location: which region server a region is assigned to is completely dynamic, so a mechanism is needed to locate the region server serving a given region.
HBase uses a three-tier structure to locate a region:
1. Get the location of the -ROOT- table from the ZooKeeper file /hbase/rs. The -ROOT- table has only one region.
2. Search the -ROOT- table to find the location of the .META. region that covers the target region. In fact, the -ROOT- table is the first region of the .META. table, and each region of the .META. table is a row record in the -ROOT- table.
3. Search the .META. table to find the location of the desired user-table region. Each region of a user table is a row record in the .META. table.
The -ROOT- table is never split into multiple regions, which guarantees that at most three hops are needed to locate any region. Clients cache the location information they look up, and the cache is not proactively invalidated, so if every cached entry on a client has gone stale, it takes six network round trips to locate the correct region: three to discover that the cache is invalid and three to fetch the new location information.
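As a hedged aside, the catalog tables can be inspected from the HBase shell in releases that still use this naming (0.96+ removed -ROOT- and renamed .META. to hbase:meta):

scan '.META.'   # one row per user-table region: region name, hosting server, start/end keys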
Store
Each region is made up of one or more Stores (at least one). HBase puts data that is accessed together into one Store, i.e. it builds one Store per ColumnFamily; a table with several column families has just as many Stores. A Store consists of one MemStore and zero or more StoreFiles. HBase decides whether a region needs to be split based on the size of its Stores.
MemStore
The MemStore lives in memory and holds modified data as key-values. When the size of a MemStore reaches a threshold (64MB by default), the MemStore is flushed to a file, generating a snapshot. HBase has a dedicated thread responsible for MemStore flush operations.
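The flush threshold is also configurable. A minimal hbase-site.xml sketch (67108864 bytes = the 64MB default cited above):

<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <value>67108864</value>
</property>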
StoreFile
When the data in a MemStore is written out to a file, that file is a StoreFile. Under the hood, StoreFiles are saved in the HFile format.
HFile
HFile is HBase's binary file format for storing key-value data on Hadoop. An HFile is variable-length; only two of its sections, Trailer and FileInfo, have fixed lengths. The Trailer holds pointers to the starting points of the other sections, and FileInfo records metadata about the file. The Data Block is the basic unit of HBase IO, and to improve efficiency the HRegionServer applies an LRU-based block cache to it. The size of each data block can be specified when a table is created (64KB by default): large blocks favor sequential scans, while small blocks favor random lookups. Apart from the Magic header at its start, each data block is a concatenation of key-value pairs; the Magic content is a random number whose purpose is to detect data corruption.
The HFile structure (diagram omitted) breaks down as follows:
1. Data Block segment: holds the table data; can be compressed.
2. Meta Block segment (optional): holds user-defined key-value pairs; can be compressed.
3. FileInfo segment: holds the HFile's metadata; cannot be compressed. Users may add their own metadata in this section.
4. Data Block Index segment: the index of the Data Blocks.
5. Meta Block Index segment (optional): the index of the Meta Blocks.
6. Trailer segment: fixed-length; stores the offset of each of the other segments.
When an HFile is read, the Trailer is read first (it records the starting position of every segment, and each segment's Magic number is used as a sanity check), and then the Data Block Index is loaded into memory. This way, looking up a key does not require scanning the whole HFile: the block containing the key is located in memory, the entire block is read into memory with a single disk IO, and the key is then found inside it. The Data Block Index is evicted by an LRU mechanism. Data Blocks and Meta Blocks are usually stored compressed, which greatly reduces network IO and disk IO, at the cost, of course, of CPU time for compression and decompression. (Note: the Data Block Index has drawbacks: a) it consumes a lot of memory; b) it makes startup loading slow.)
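The per-family block size (and compression) mentioned above is chosen at table-creation time. A hedged HBase shell sketch (the names are hypothetical; BLOCKSIZE is in bytes, and COMPRESSION => 'GZ' assumes gzip support is available on the cluster):

create 'logs', {NAME => 'cf', BLOCKSIZE => '65536', COMPRESSION => 'GZ'}
describe 'logs'   # shows the effective block size and compression codec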
HLog
HLog (WAL log): WAL stands for Write-Ahead Log; it is used for disaster recovery. The HLog records all changes to the data, and if a region server goes down, the data can be recovered from the log.
LogFlusher
Periodically writes buffered log entries out to the log file.
LogRoller
Manages and maintains (rolls) the log files.
IV. HBase deployment and installation steps
There are two ways to install HBase: stand-alone and distributed.
1. Stand-alone installation. HBase needs to run on top of a Hadoop environment, so installing Hadoop is a prerequisite for installing HBase. For setting up the Hadoop environment, refer to http://blog.csdn.net/u010330043/article/details/51235373. Download the hbase-0.98.11-hadoop2-bin.tar.gz package, which matches Hadoop 2.2.0.
The installation steps for HBase are as follows.
Step one: unzip hbase-0.98.11-hadoop2-bin.tar.gz to the chosen directory (here /usr/java) and assign ownership to the hadoop user (the account that runs Hadoop).
The download here is hbase-0.98.11-hadoop2 (the Hadoop cluster runs 2.2.0); unzip it to /usr/java and rename it to hbase.
[root@cs0 java]$ tar -zxvf hbase-0.98.11-hadoop2-bin.tar.gz
[root@cs0 java]$ mv hbase-0.98.11-hadoop2 hbase
[root@cs0 java]$ chown -R hadoop:hadoop hbase
Step two: add the HBase environment variables to the /etc/profile file.
[root@cs0 java]$ vi /etc/profile
HBASE_HOME=/usr/java/hbase
PATH=$JAVA_HOME/bin:$HIVE_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$PATH
Make the configuration take effect immediately.
[root@cs0 tmp]# source /etc/profile
Step three: modify conf/hbase-env.sh.
1) Remove the "#" in front of JAVA_HOME and change it to your own Java installation path. 2) Remove the "#" in front of HBASE_MANAGES_ZK and set its value to true (HBase then manages its own ZooKeeper, so no separate ZooKeeper installation is needed).
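After these edits, the relevant lines of conf/hbase-env.sh would look roughly as follows (the JDK path is an assumption; substitute your own):

export JAVA_HOME=/usr/java/jdk1.7.0_55
export HBASE_MANAGES_ZK=true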
Step four: open conf/hbase-site.xml and add the following.
The hbase.rootdir value must correspond to the fs.default.name property in the conf/core-site.xml file of the previously installed Hadoop.
fs.default.name is set to hdfs://ywendeng:9000/; hbase.rootdir is set to hdfs://ywendeng:9000/hbase; hbase.zookeeper.quorum is set to ywendeng; and hbase.tmp.dir is set to the tmp directory created previously: /usr/java/hbase/tmp. The code is as follows:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://cs:9000/</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://cs:9000/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>cs</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/home/hadoop/data/hbase_${user.name}</value>
  </property>
</configuration>
Step five: start HBase (Hadoop must already be started).
[hadoop@cs0 conf]$ start-hbase.sh
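As a quick, hedged sanity check (jps ships with the JDK), confirm the daemons are up:

[hadoop@cs0 conf]$ jps
# expect to see HMaster in the output (plus HRegionServer in distributed mode)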
Step six: after HBase starts successfully, enter the URL 192.168.80.128:60010 in a browser (note: 192.168.80.128 is the IP address of the virtual machine where HBase is installed).
2. Distributed installation
Step one: upload the HBase installation package.
Step two: unzip it.
Step three: configure the HBase cluster by modifying three files (the ZooKeeper cluster must already be installed first).
Note: put Hadoop's hdfs-site.xml and core-site.xml under hbase/conf.
1. vim hbase-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_55
# tell HBase to use the external ZooKeeper
export HBASE_MANAGES_ZK=false
2. vim hbase-site.xml
<configuration>
  <!-- Specify the path where HBase stores data on HDFS -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://ns1/hbase</value>
  </property>
  <!-- Specify that HBase runs in distributed mode -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <!-- Specify the ZooKeeper addresses, separated by "," -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>cs3:2181,cs4:2181,cs5:2181</value>
  </property>
</configuration>
3. vim regionservers
cs3
cs4
cs5
4. Copy hbase to the other nodes:
scp -r app/hbase-0.96.2-hadoop2/ cs2:/home/hadoop/app/
scp -r app/hbase-0.96.2-hadoop2/ cs3:/home/hadoop/app/
scp -r app/hbase-0.96.2-hadoop2/ cs4:/home/hadoop/app/
scp -r app/hbase-0.96.2-hadoop2/ cs5:/home/hadoop/app/
Step four: start the HBase cluster.
First start HDFS with start-dfs.sh, then start HBase by running on the master node:
start-hbase.sh
Step five: access the HBase administration page through a browser:
192.168.80.128:60010
Step six: to ensure cluster reliability, start additional HMasters on other nodes:
hbase-daemon.sh start master