HBase Programming API Starter Series (10): Modifying Tables from the Management (Admin) Side

Source: Internet
Author: User
Tags: zookeeper

 

Here we move on to something more advanced. In development, you should avoid logging into the server (for example, via the HBase shell) to modify a table. Instead, modify the HBase table from the management (admin) side through the client API, using a connection backed by a thread pool — the same approach used in production development.

package zhouls.bigdata.HbaseProject.Pool;

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;

public class TableConnection {

    private TableConnection() {
    }

    private static HConnection connection = null;

    public static HConnection getConnection() {
        if (connection == null) {
            // Build a fixed-size thread pool
            ExecutorService pool = Executors.newFixedThreadPool(10);
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "hadoopmaster:2181,hadoopslave1:2181,hadoopslave2:2181");
            try {
                // Create the connection from the configuration and the thread pool
                connection = HConnectionManager.createConnection(conf, pool);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        return connection;
    }
}
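One caveat about the class above: getConnection() initializes the connection lazily, but the null check is not thread-safe — two threads can race past it and create two connections. A minimal sketch of the initialization-on-demand holder idiom, which the JVM guarantees runs exactly once (plain Java; the `Resource` class here is a hypothetical stand-in for the pooled HConnection, not part of the article's code):

```java
// Sketch: thread-safe lazy singleton via the initialization-on-demand holder idiom.
public class LazySingleton {

    // Hypothetical stand-in for the pooled HConnection in the article.
    public static final class Resource {
        private final String name;
        public Resource(String name) { this.name = name; }
        public String getName() { return name; }
    }

    // The JVM initializes Holder (and thus INSTANCE) at most once, on first
    // access, with synchronization guaranteed by the class-loading spec.
    private static final class Holder {
        static final Resource INSTANCE = new Resource("pooled-connection");
    }

    public static Resource getConnection() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        // Repeated calls return the same instance
        System.out.println(LazySingleton.getConnection() == LazySingleton.getConnection());
    }
}
```

The same effect could be had by making getConnection() synchronized, but the holder idiom avoids taking a lock on every call.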

1. Modifying an HBase Table

Note: for the time being, this code still contains errors (in particular in the modifyTable method below).

package zhouls.bigdata.HbaseProject.Pool;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.MasterNotRunningException;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.ZooKeeperConnectionException;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

// TableConnection is in the same package, so no import is needed. Do NOT
// import javax.xml.transform.Result here: the Result we want is
// org.apache.hadoop.hbase.client.Result.

public class HBaseTest {

    public static void main(String[] args) throws Exception {
//        The earlier single-connection examples, kept here for reference:
//        HTable table = new HTable(getConfig(), TableName.valueOf("test_table")); // table name is test_table
//        Put put = new Put(Bytes.toBytes("row_04")); // row key is row_04
//        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy1")); // column family f, qualifier name, value Andy1
//        put.add(Bytes.toBytes("f2"), Bytes.toBytes("name"), Bytes.toBytes("Andy3")); // column family f2, qualifier name, value Andy3
//        table.put(put);
//        table.close();
//
//        Get get = new Get(Bytes.toBytes("row_04"));
//        get.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("age")); // if no column is specified, all columns are returned
//        org.apache.hadoop.hbase.client.Result rest = table.get(get);
//        System.out.println(rest.toString());
//        table.close();
//
//        Delete delete = new Delete(Bytes.toBytes("row_2"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("email"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("name"));
//        table.delete(delete);
//        table.close();
//
//        Delete delete = new Delete(Bytes.toBytes("row_04"));
//        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name")); // deleteColumn deletes the newest version of the cell
//        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name")); // deleteColumns deletes all versions of the cell
//        table.delete(delete);
//        table.close();
//
//        Scan scan = new Scan();
//        scan.setStartRow(Bytes.toBytes("row_01")); // the start row key is inclusive
//        scan.setStopRow(Bytes.toBytes("row_03")); // the stop row key is exclusive
//        scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
//        ResultScanner rst = table.getScanner(scan); // iterate over the whole range
//        for (org.apache.hadoop.hbase.client.Result next = rst.next(); next != null; next = rst.next()) {
//            for (Cell cell : next.rawCells()) { // loop over the cells of one row
//                System.out.println(next.toString());
//                System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
//                System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
//                System.out.println("value:" + Bytes.toString(CellUtil.cloneValue(cell)));
//            }
//        }
//        table.close();

        HBaseTest hbaseTest = new HBaseTest();
        hbaseTest.insertValue();
        hbaseTest.getValue();
        hbaseTest.delete();
        hbaseTest.scanValue();
        hbaseTest.createTable("test_table3", "f"); // check whether the table exists before creating it (production practice)
        hbaseTest.deleteTable("test_table4"); // check whether the table exists before deleting it (production practice)
        // Still broken: ':' is the family/qualifier separator, so "f:age" is not
        // a legal column family name and this call fails at runtime.
        hbaseTest.modifyTable("test_table", "row_02", "f", new HColumnDescriptor("f:age"));
    }


    // In production development, use the thread-pool-backed connection.
    public void insertValue() throws Exception {
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Put put = new Put(Bytes.toBytes("row_01")); // row key is row_01
        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy0"));
        table.put(put);
        table.close();
    }



    // In production development, use the thread-pool-backed connection.
    public void getValue() throws Exception {
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Get get = new Get(Bytes.toBytes("row_03"));
        get.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
        org.apache.hadoop.hbase.client.Result rest = table.get(get);
        System.out.println(rest.toString());
        table.close();
    }

    // In production development, use the thread-pool-backed connection.
    public void delete() throws Exception {
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Delete delete = new Delete(Bytes.toBytes("row_01"));
        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name")); // deleteColumn deletes the newest version of the cell
        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name")); // deleteColumns deletes all versions of the cell
        table.delete(delete);
        table.close();
    }

    // In production development, use the thread-pool-backed connection.
    public void scanValue() throws Exception {
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Scan scan = new Scan();
        scan.setStartRow(Bytes.toBytes("row_02")); // the start row key is inclusive
        scan.setStopRow(Bytes.toBytes("row_04")); // the stop row key is exclusive
        scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
        ResultScanner rst = table.getScanner(scan); // iterate over the whole range
        for (org.apache.hadoop.hbase.client.Result next = rst.next(); next != null; next = rst.next()) {
            for (Cell cell : next.rawCells()) { // loop over the cells of one row
                System.out.println(next.toString());
                System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
                System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
                System.out.println("value:" + Bytes.toString(CellUtil.cloneValue(cell)));
            }
        }
        table.close();
    }


    // In production development, check whether the table exists before creating it.
    public void createTable(String tableName, String family) throws MasterNotRunningException, ZooKeeperConnectionException, IOException {
        Configuration conf = HBaseConfiguration.create(getConfig());
        HBaseAdmin admin = new HBaseAdmin(conf);
        HTableDescriptor tableDesc = new HTableDescriptor(TableName.valueOf(tableName));
        HColumnDescriptor hcd = new HColumnDescriptor(family);
        hcd.setMaxVersions(3); // keep up to three versions per cell; HColumnDescriptor offers many more options
        tableDesc.addFamily(hcd);
        if (!admin.tableExists(tableName)) {
            admin.createTable(tableDesc);
        } else {
            System.out.println(tableName + " exists");
        }
        admin.close();
    }


    // For the time being, this method still contains errors (noted in the comments).
    public void modifyTable(String tableName, String rowkey, String family, HColumnDescriptor hColumnDescriptor) throws MasterNotRunningException, ZooKeeperConnectionException, IOException {
        Configuration conf = HBaseConfiguration.create(getConfig());
        HBaseAdmin admin = new HBaseAdmin(conf);
        HTableDescriptor tableDesc = new HTableDescriptor(TableName.valueOf(tableName));
        HColumnDescriptor hcd = new HColumnDescriptor(family);
        // Bug: getNamespaceDescriptor expects a namespace name, not a table name.
        NamespaceDescriptor nsd = admin.getNamespaceDescriptor(tableName);
        nsd.setConfiguration("hbase.namespace.quota.maxregion", "10");
        nsd.setConfiguration("hbase.namespace.quota.maxtables", "10");
        if (admin.tableExists(tableName)) {
            admin.modifyColumn(tableName, hcd);
            admin.modifyTable(TableName.valueOf(tableName), tableDesc);
            admin.modifyNamespace(nsd);
        } else {
            System.out.println(tableName + " does not exist");
        }
        admin.close();
    }
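The call in main passes "f:age" as a column family name, but ':' is reserved as the family/qualifier separator, and a qualifier carries no schema of its own — only families do. A hedged sketch of how the family modification could be done instead (assumptions: a 0.96/0.98-era client on the classpath, a reachable cluster, and the getConfig() helper from this article; this is an illustration, not the author's code, and it cannot run without a live HBase cluster):

```java
package zhouls.bigdata.HbaseProject.Pool;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

// Sketch only: requires a live HBase cluster and the hbase-client jars.
public class ModifyFamilyExample {

    public static void main(String[] args) throws Exception {
        HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create(HBaseTest.getConfig()));
        try {
            // Family name only -- "f:age" would be rejected because ':' is reserved.
            HColumnDescriptor hcd = new HColumnDescriptor("f");
            hcd.setMaxVersions(5); // example schema change: keep five versions
            if (admin.tableExists("test_table")) {
                admin.disableTable("test_table"); // safest path for schema changes on older versions
                admin.modifyColumn("test_table", hcd);
                admin.enableTable("test_table");
            }
        } finally {
            admin.close();
        }
    }
}
```

Disabling the table before the schema change and re-enabling it afterwards is the conservative route; some versions support online schema changes, but it is off by default.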


    // In production development, check whether the table exists before deleting it.
    public void deleteTable(String tableName) throws MasterNotRunningException, ZooKeeperConnectionException, IOException {
        Configuration conf = HBaseConfiguration.create(getConfig());
        HBaseAdmin admin = new HBaseAdmin(conf);
        if (admin.tableExists(tableName)) {
            admin.disableTable(tableName); // a table must be disabled before it can be deleted
            admin.deleteTable(tableName);
        } else {
            System.out.println(tableName + " does not exist");
        }
        admin.close();
    }




    public static Configuration getConfig() {
        Configuration configuration = HBaseConfiguration.create();
        configuration.set("hbase.rootdir", "hdfs://hadoopmaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "hadoopmaster:2181,hadoopslave1:2181,hadoopslave2:2181");
        return configuration;
    }
}

