Deploy a Custom Observer Coprocessor in HBase 0.98.4

Source: Internet
Author: User


HBase has supported coprocessors since version 0.92. They are designed to let users run their own code on the regionserver, that is, to move the computation to where the data lives, which is the same idea as MapReduce. HBase coprocessors fall into two categories: observers and endpoints. In short, an observer is analogous to a trigger in a relational database, while an endpoint is analogous to a stored procedure. There are many introductions to the HBase coprocessor; as a beginner, I learned a lot from the documents that others have generously shared.

This article records the process of deploying a custom coprocessor on a fully distributed cluster. Two deployment methods are covered: the first configures the coprocessor in hbase-site.xml; the second configures it through the table descriptor (alter). With the former, the coprocessor is loaded by every region of every table; with the latter, only by the regions of the specified table. Along the way, the write-up points out the steps that are error-prone.
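As a preview of the first method, here is a minimal hbase-site.xml sketch. It assumes the coprocessor jar built later in this article is already on every regionserver's classpath; hbase.coprocessor.region.classes is the standard property for registering region observers:

```xml
<!-- hbase-site.xml: method one, loaded by every region of every table. -->
<!-- Assumes the coprocessor jar is already on each regionserver's classpath. -->
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hbase.kora.coprocessor.RegionObserverExample</value>
</property>
```

After editing hbase-site.xml on every node, HBase must be restarted for the change to take effect.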


First, let's look at the environment:

Hadoop1.updb.com 192.168.0.101 Role: master
Hadoop2.updb.com 192.168.0.102 Role: regionserver
Hadoop3.updb.com 192.168.0.103 Role: regionserver
Hadoop4.updb.com 192.168.0.104 Role: regionserver
Hadoop5.updb.com 192.168.0.105 Role: regionserver

Start by writing the custom coprocessor. The code is taken from the PDF of the HBase definitive guide; only the package name has been changed:

/**
 * Coprocessor
 * When a client uses the get command to retrieve a specific row, this custom
 * observer coprocessor is triggered. The trigger condition is that the rowkey
 * specified by the Get matches the FIXED_ROW defined in the program, @@@GETTIME@@@.
 * When triggered, the coprocessor builds a KeyValue instance on the server and
 * returns it to the client: the rowkey, column family, and column qualifier of
 * this kv instance are all @@@GETTIME@@@, and the column value is the current
 * server time.
 */

package org.apache.hbase.kora.coprocessor;

import java.io.IOException;
import java.util.List;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.HRegion;
import org.apache.hadoop.hbase.util.Bytes;

public class RegionObserverExample extends BaseRegionObserver {
    public static final Log LOG = LogFactory.getLog(HRegion.class);
    public static final byte[] FIXED_ROW = Bytes.toBytes("@@@GETTIME@@@");

    @Override
    public void preGet(ObserverContext<RegionCoprocessorEnvironment> c,
            Get get, List<KeyValue> result) throws IOException {
        LOG.debug("Got preGet for row: " + Bytes.toStringBinary(get.getRow()));

        if (Bytes.equals(get.getRow(), FIXED_ROW)) {
            KeyValue kv = new KeyValue(get.getRow(), FIXED_ROW, FIXED_ROW,
                Bytes.toBytes(System.currentTimeMillis()));
            LOG.debug("Had a match, adding fake kv: " + kv);
            result.add(kv);
        }
    }
}
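On the client side, the cell value returned by this observer is the raw 8-byte big-endian encoding that Bytes.toBytes(long) produces, and HBase's Bytes.toLong reverses it. The following self-contained sketch shows that round trip using only the JDK; the class name and the ByteBuffer stand-ins are illustrative, and real client code would use org.apache.hadoop.hbase.util.Bytes instead:

```java
import java.nio.ByteBuffer;

public class GetTimeDecode {
    // Equivalent of Bytes.toBytes(long): 8-byte big-endian encoding,
    // the same form the coprocessor stores in the fake KeyValue.
    static byte[] toBytes(long v) {
        return ByteBuffer.allocate(8).putLong(v).array();
    }

    // Equivalent of Bytes.toLong(byte[]): decode the cell value.
    static long toLong(byte[] b) {
        return ByteBuffer.wrap(b).getLong();
    }

    public static void main(String[] args) {
        long serverTime = System.currentTimeMillis();
        byte[] cellValue = toBytes(serverTime);  // what the observer returns
        System.out.println(toLong(cellValue) == serverTime); // prints true
    }
}
```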

After coding, compile the class and package it into a jar. In Eclipse, right-click the class name and choose Export; the export window is displayed:

Select JAR file and click Next:

Specify the save path of the jar file and click Finish to complete the packaging of the RegionObserverExample class. Next, upload the prepared jar file via ftp to the master node of the HBase cluster, hadoop1 here.
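For the second method, the jar is loaded per table through the table descriptor. A hedged sketch of the remaining steps follows; the HDFS path, table name, and priority value 1001 are illustrative choices, not values from the original article:

```shell
# Put the jar somewhere every regionserver can read it, e.g. HDFS:
hadoop fs -mkdir -p /user/hbase/coprocessor
hadoop fs -put RegionObserverExample.jar /user/hbase/coprocessor/

# Then, in the hbase shell, attach the coprocessor to a single table
# (format: 'jar path|class name|priority|arguments'):
#   disable 'testtable'
#   alter 'testtable', METHOD => 'table_att', 'coprocessor' =>
#     'hdfs:///user/hbase/coprocessor/RegionObserverExample.jar|org.apache.hbase.kora.coprocessor.RegionObserverExample|1001|'
#   enable 'testtable'
```

With this method only the regions of 'testtable' load the observer, and no HBase restart is needed.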

