Big Data series: distributed database HBase 1.2.4 + ZooKeeper installation and CRUD operations


An earlier article covered the deployment and use of HBase 0.9.8. Deploying and using the newer HBase 1.2.4 differs in some respects, as described below:

1. Environment preparation:

1. A working Hadoop installation is required (hadoop-2.7.3 here). For Hadoop installation, refer to the author's earlier article in this Big Data series on Hadoop distributed cluster deployment.

2. Installation packages: zookeeper-3.4.9.tar.gz, hbase-1.2.4-bin.tar.gz

2. Installation steps:

1. Installing ZooKeeper

1. Unpack zookeeper-3.4.9.tar.gz:

cd
tar -xzvf zookeeper-3.4.9.tar.gz
ll zookeeper-3.4.9

2. Create the configuration file conf/zoo.cfg:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# The directory where the snapshot is stored.
# Do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/home/mfz/zookeeper-3.4.9/zookeeperdata
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=1
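
If ZooKeeper fails to start, a common cause is a missing or unwritable snapshot directory; creating dataDir up front avoids the issue (path taken from the configuration above):

mkdir -p /home/mfz/zookeeper-3.4.9/zookeeperdata

Note that this zoo.cfg runs a single ZooKeeper node, which matches the single master entry configured for hbase.zookeeper.quorum later on.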

3. Start ZK

bin/zkServer.sh start
jps

4. Check ZooKeeper's status on port 2181:

echo stat | nc master 2181
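
The stat command prints the server's role and current client connections. Two other quick checks that work on ZooKeeper 3.4 are the ruok four-letter command, which replies imok when the server is healthy, and the bundled status script:

echo ruok | nc master 2181
bin/zkServer.sh status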

2. Installing HBase-1.2.4

1. Place the HBase archive in the user's ~/resources directory

2. Copy it to the user's home directory and unpack it:

cp resources/hbase-1.2.4-bin.tar.gz .
tar -xzvf hbase-1.2.4-bin.tar.gz
ll hbase-1.2.4

3. Configure conf/hbase-env.sh

...

# The java implementation to use. Java 1.7+ required.
# export JAVA_HOME=/usr/java/jdk1.6.0/
export JAVA_HOME=/usr/java/jdk1.8.0_102/

# Extra Java runtime options.
# Below are what we set by default. May only work with SUN JVM.
# For more on why as well as other possible settings,
# see http://wiki.apache.org/hadoop/PerformanceTuning
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"

# Configure PermSize. Only needed in JDK7. You can safely remove it for JDK8+
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"

# Tell HBase whether it should manage it's own instance of Zookeeper or not.
export HBASE_MANAGES_ZK=false

...

4. Configure conf/hbase-site.xml

<configuration>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://master:9000/hbase</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>master</value>
    </property>
</configuration>
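
hbase.rootdir must point at the same NameNode address as fs.defaultFS in Hadoop's core-site.xml (hdfs://master:9000 here). Assuming the hadoop-2.7.3 installation from the environment preparation step is on the PATH, a quick sanity check that HDFS answers at that address:

hdfs dfs -ls hdfs://master:9000/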

5. Edit conf/regionservers, replacing its content with slave (the hostname of the cluster's slave node), as shown below.
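
For example, run from the HBase installation directory (this overwrites the default localhost entry):

echo slave > conf/regionservers
cat conf/regionservers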

6. Configure the environment variables. They can go in ~/.bash_profile, or in /etc/profile when configuring as the root user. Note: run source {filename} afterwards for the changes to take effect.

# HBase config
export HBASE_HOME=/home/mfz/hbase-1.2.4
export PATH=$HBASE_HOME/bin:$PATH
export HADOOP_CLASSPATH=$HBASE_HOME/lib/*
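
After editing, reload the profile and confirm the variables took effect (the file name depends on which profile was edited):

source ~/.bash_profile
echo $HBASE_HOME
which hbase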

7. After configuration is complete, copy the HBase installation directory to the slave cluster node:

cd
scp -r hbase-1.2.4 slave:~/

8. Go to the HBase installation directory and start HBase:

bin/start-hbase.sh
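
Once the script returns, jps should list an HMaster process on the master node (alongside the Hadoop daemons and ZooKeeper's QuorumPeerMain) and an HRegionServer process on each node listed in conf/regionservers:

jps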

9. Verify HBase by opening the master's web UI in a browser. This step differs from version 0.9.8: the web UI port has changed from 60010 to 16010.
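
If no graphical browser is available, the UI can also be probed from the command line; an HTTP 200 response indicates the master is serving its status page:

curl -I http://master:16010/master-status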

10. Enter the HBase shell to perform insert, delete, update, and query operations (consistent with the HBase 0.9.8 shell commands):

# Open the HBase shell
bin/hbase shell

# Create table 'hbasename' with two column families, 'one' and 'two'
create 'hbasename', 'one', 'two'

# List tables
list

# View the table structure
describe 'hbasename'

# Insert data
put 'hbasename', 'test1', 'one', 'helloWorld'

# View data
scan 'hbasename'
get 'hbasename', 'test1'

# Modify the table structure (add column family 'three')
alter 'hbasename', NAME => 'three'

# Delete the table
disable 'hbasename'
drop 'hbasename'
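
The shell can also be driven non-interactively, since it reads commands from standard input; this is convenient for scripting quick checks:

echo "scan 'hbasename'" | bin/hbase shell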

For more HBase shell commands, see the official documentation: http://hbase.apache.org/book.html#shell_exercises

3. hbase-demo

1. BaseConfig.java

package hbase.base;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

/**
 * @author mengfanzhu
 * @Package hbase.base
 * @Description:
 * @date 17/3/16 10:59
 */
public class BaseConfig {

    /**
     * Create an HBase connection
     * @return
     */
    public static Connection getConnection() throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        conf.set("hbase.zookeeper.quorum", "10.211.55.5");
        conf.set("hbase.master", "10.211.55.5:9000");
        Connection conn = ConnectionFactory.createConnection(conf);
        return conn;
    }
}

2. BaseDao.java

package hbase.base;

import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.*;

/**
 * @author mengfanzhu
 * @Package hbase.base
 * @Description:
 * @date 17/3/16 10:58
 */
public interface BaseDao {

    /**
     * Create table
     * @param tableDescriptor
     */
    public void createTable(HTableDescriptor tableDescriptor) throws Exception;

    /**
     * Insert data
     * @param putData
     * @param tableName
     */
    public void putData(Put putData, String tableName) throws Exception;

    /**
     * Delete data
     * @param delData
     * @param tableName
     */
    public void delData(Delete delData, String tableName) throws Exception;

    /**
     * Scan data
     * @param scan
     * @param tableName
     * @return
     */
    public ResultScanner scanData(Scan scan, String tableName) throws Exception;

    /**
     * Get data
     * @param get
     * @param tableName
     * @return
     */
    public Result getData(Get get, String tableName) throws Exception;
}

3. BaseDaoImpl.java

package hbase.base;

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * @author mengfanzhu
 * @Package hbase.base
 * @Description: base service implementation
 * @date 17/3/16 11:11
 */
public class BaseDaoImpl implements BaseDao {

    static Logger logger = LoggerFactory.getLogger(BaseDaoImpl.class);

    /**
     * Create table
     * @param tableDescriptor
     */
    public void createTable(HTableDescriptor tableDescriptor) throws Exception {
        Admin admin = BaseConfig.getConnection().getAdmin();
        // Only create the table if it does not exist yet
        if (!admin.tableExists(tableDescriptor.getTableName())) {
            admin.createTable(tableDescriptor);
        }
        admin.close();
    }

    public void addTableColumn(String tableName, HColumnDescriptor columnDescriptor) throws Exception {
        Admin admin = BaseConfig.getConnection().getAdmin();
        admin.addColumn(TableName.valueOf(tableName), columnDescriptor);
        admin.close();
    }

    /**
     * Insert data
     * @param putData
     * @param tableName
     */
    public void putData(Put putData, String tableName) throws Exception {
        Table table = BaseConfig.getConnection().getTable(TableName.valueOf(tableName));
        table.put(putData);
        table.close();
    }

    /**
     * Delete data
     * @param delData
     * @param tableName
     */
    public void delData(Delete delData, String tableName) throws Exception {
        Table table = BaseConfig.getConnection().getTable(TableName.valueOf(tableName));
        table.delete(delData);
        table.close();
    }

    /**
     * Scan data
     * @param scan
     * @param tableName
     * @return
     */
    public ResultScanner scanData(Scan scan, String tableName) throws Exception {
        Table table = BaseConfig.getConnection().getTable(TableName.valueOf(tableName));
        ResultScanner rs = table.getScanner(scan);
        table.close();
        return rs;
    }

    /**
     * Get data
     * @param get
     * @param tableName
     * @return
     */
    public Result getData(Get get, String tableName) throws Exception {
        Table table = BaseConfig.getConnection().getTable(TableName.valueOf(tableName));
        Result result = table.get(get);
        table.close();
        return result;
    }
}

4. StudentsServiceImpl.java

package hbase.students;

import hbase.base.BaseDao;
import hbase.base.BaseDaoImpl;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

import java.util.HashMap;
import java.util.Map;

/**
 * @author mengfanzhu
 * @Package hbase.students
 * @Description: students service
 * @date 17/3/16 11:36
 */
public class StudentsServiceImpl {

    private BaseDao baseDao = new BaseDaoImpl();

    private static final String TABLE_NAME = "t_students";
    private static final String STU_ROW_NAME = "stu_row1";
    private static final byte[] FAMILY_NAME_1 = Bytes.toBytes("name");
    private static final byte[] FAMILY_NAME_2 = Bytes.toBytes("age");
    private static final byte[] FAMILY_NAME_3 = Bytes.toBytes("scores");

    public void createStuTable() throws Exception {
        // Create the table descriptor and its column families
        HTableDescriptor tableDescriptor = new HTableDescriptor(TableName.valueOf(TABLE_NAME));
        HColumnDescriptor columnDescriptor_1 = new HColumnDescriptor(FAMILY_NAME_1);
        HColumnDescriptor columnDescriptor_2 = new HColumnDescriptor(FAMILY_NAME_2);
        HColumnDescriptor columnDescriptor_3 = new HColumnDescriptor(FAMILY_NAME_3);
        tableDescriptor.addFamily(columnDescriptor_1);
        tableDescriptor.addFamily(columnDescriptor_2);
        tableDescriptor.addFamily(columnDescriptor_3);
        baseDao.createTable(tableDescriptor);
    }

    /**
     * Insert data <column family name, value>
     * @param bytes
     */
    public void putStuData(Map<byte[], byte[]> bytes) throws Exception {
        Put put = new Put(Bytes.toBytes(STU_ROW_NAME));
        int i = 1;
        for (byte[] familyNames : bytes.keySet()) {
            // Column qualifier is the positional index; the cell value comes from the map
            put.addColumn(familyNames, Bytes.toBytes(i), bytes.get(familyNames));
            i++;
        }
        baseDao.putData(put, TABLE_NAME);
    }

    public ResultScanner scanData(Map<byte[], byte[]> bytes) throws Exception {
        Scan scan = new Scan();
        for (byte[] familyNames : bytes.keySet()) {
            scan.addColumn(familyNames, bytes.get(familyNames));
        }
        scan.setCaching(100);
        ResultScanner results = baseDao.scanData(scan, TABLE_NAME);
        return results;
    }

    public void delStuData(String rowId, byte[] familyName, byte[] qualifierName) throws Exception {
        Delete delete = new Delete(Bytes.toBytes(rowId));
        delete.addColumn(familyName, qualifierName);
        baseDao.delData(delete, TABLE_NAME);
    }

    public static void main(String[] args) throws Exception {
        StudentsServiceImpl ssi = new StudentsServiceImpl();
        // Create table
        ssi.createStuTable();
        // Insert data
        Map<byte[], byte[]> bytes = new HashMap<byte[], byte[]>();
        bytes.put(FAMILY_NAME_1, Bytes.toBytes("jack"));
        bytes.put(FAMILY_NAME_2, Bytes.toBytes("10"));
        bytes.put(FAMILY_NAME_3, Bytes.toBytes("o:90,t:89,s:100"));
        ssi.putStuData(bytes);
        // View data
        Map<byte[], byte[]> bytesScans = new HashMap<byte[], byte[]>();
        ResultScanner results = ssi.scanData(bytesScans);
        for (Result result : results) {
            while (result.advance()) {
                System.out.println(result.current());
            }
        }
    }
}

5. pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>mfz.hbase</groupId>
    <artifactId>hbase-demo</artifactId>
    <version>1.0-SNAPSHOT</version>

    <repositories>
        <repository>
            <id>aliyun</id>
            <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
        </repository>
    </repositories>

    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.6.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>1.2.4</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.9</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/com.yammer.metrics/metrics-core -->
        <dependency>
            <groupId>com.yammer.metrics</groupId>
            <artifactId>metrics-core</artifactId>
            <version>2.2.0</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>2.3</version>
                <configuration>
                    <classifier>dist</classifier>
                    <appendAssemblyId>true</appendAssemblyId>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
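
To build and run the demo against the cluster, something along these lines should work. The jar name is derived from the artifactId, version, and assembly id above, and the main class is the StudentsServiceImpl shown earlier; adjust the names if your build output differs:

mvn clean package
java -cp target/hbase-demo-1.0-SNAPSHOT-jar-with-dependencies.jar hbase.students.StudentsServiceImpl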

6. Implementation results

The demo has been uploaded to GitHub: https://github.com/fzmeng/HBaseDemo

Done~~
