The integration of Phoenix 4.3.0 and HBase 0.98.10-hadoop2


Description: The Phoenix query engine translates SQL queries into one or more HBase scans and orchestrates their execution to produce a standard JDBC result set. By using the HBase API directly, together with coprocessors and custom filters, it delivers millisecond-level performance for simple queries and second-level performance for queries over millions of rows. More information: http://phoenix.apache.org/
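As a rough illustration (using the WEB_STAT sample table that is queried later in this article; host names and prompts follow the cluster described below), the same rows that a raw HBase shell scan returns as key/value cells can be read with ordinary SQL through sqlline, with Phoenix turning the statement into HBase scans behind the scenes:

hbase(main):001:0> scan 'WEB_STAT', {LIMIT => 2}
0: jdbc:phoenix:hadoop107,hadoop108,hadoop104> SELECT HOST, DOMAIN, CORE FROM WEB_STAT LIMIT 2;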

This article describes in detail the process of integrating Phoenix 4.3.0 with HBase 0.98.10-hadoop2.

Download the latest Phoenix package
http://apache.mesi.com.ar/phoenix/phoenix-4.3.0/bin/phoenix-4.3.0-bin.tar.gz
Copy it to the server to be used for testing and extract it:
tar -xvf phoenix-4.3.0-bin.tar.gz
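If the test server has direct Internet access, the download and extraction can be done in one go (the /root/hadoop location is simply the layout used later in this article):

wget http://apache.mesi.com.ar/phoenix/phoenix-4.3.0/bin/phoenix-4.3.0-bin.tar.gz
tar -xvf phoenix-4.3.0-bin.tar.gz -C /root/hadoop/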
Replace phoenix-4.3.0-server.jar
Replace the Phoenix dependency jar on the RegionServers and the HMaster with the new version.
The RegionServers are deployed on hadoop104 and hadoop108; the HMaster is deployed on hadoop107.
[root@hadoop105 hadoop]# scp phoenix-4.3.0-bin/phoenix-4.3.0-server.jar root@hadoop104:/root/hadoop/hbase-0.98.10-hadoop2/lib
[root@hadoop105 hadoop]# scp phoenix-4.3.0-bin/phoenix-4.3.0-server.jar root@hadoop108:/root/hadoop/hbase-0.98.10-hadoop2/lib
[root@hadoop105 hadoop]# scp phoenix-4.3.0-bin/phoenix-4.3.0-server.jar root@hadoop107:/root/hadoop/hbase-0.98.10-hadoop2/lib
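A quick way to confirm that the jar landed on each node (repeat for hadoop108 and hadoop107; host names and paths as above):

[root@hadoop105 hadoop]# ssh root@hadoop104 "ls /root/hadoop/hbase-0.98.10-hadoop2/lib | grep phoenix"
phoenix-4.3.0-server.jar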
Restart the RegionServers
Log in to the HMaster node (hadoop107) and restart the HBase service:
[root@hadoop107 hbase-0.98.10-hadoop2]# cd bin/
[root@hadoop107 bin]# ./stop-hbase.sh
stopping hbase.....
hadoop107: stopping zookeeper.
hadoop104: stopping zookeeper.
hadoop108: stopping zookeeper.
[root@hadoop107 bin]# ./start-hbase.sh
hadoop107: starting zookeeper, logging to /root/hadoop/hbase-0.98.10-hadoop2/logs/hbase-root-zookeeper-hadoop107.out
hadoop108: starting zookeeper, logging to /root/hadoop/hbase-0.98.10-hadoop2/logs/hbase-root-zookeeper-hadoop108.out
hadoop104: starting zookeeper, logging to /root/hadoop/hbase-0.98.10-hadoop2/logs/hbase-root-zookeeper-hadoop104.out
starting master, logging to /root/hadoop/hbase-0.98.10-hadoop2/logs/hbase-root-master-hadoop107.out
hadoop104: starting regionserver, logging to /root/hadoop/hbase-0.98.10-hadoop2/logs/hbase-root-regionserver-hadoop104.out
hadoop108: starting regionserver, logging to /root/hadoop/hbase-0.98.10-hadoop2/logs/hbase-root-regionserver-hadoop108.out
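To confirm the daemons came back up, a quick jps check on each node is usually enough (host names as above; the exact process list depends on your deployment):

[root@hadoop107 bin]# jps | grep -E "HMaster|HQuorumPeer"
[root@hadoop107 bin]# ssh root@hadoop104 "jps | grep HRegionServer"
[root@hadoop107 bin]# ssh root@hadoop108 "jps | grep HRegionServer"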
Configure the client classpath
Log in to the machine where Phoenix is installed (hadoop105) and configure the client classpath so that it contains phoenix-4.3.0-client.jar, by adding the following to /etc/profile:
export PHOENIX_HOME=/root/hadoop/phoenix-4.3.0-bin
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$PHOENIX_HOME/phoenix-4.3.0-client.jar
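After editing /etc/profile, reload it so the new variables take effect in the current shell, and check that the client jar is on the classpath (a minimal sanity check):

[root@hadoop105 ~]# source /etc/profile
[root@hadoop105 ~]# echo $CLASSPATH | tr ':' '\n' | grep phoenix
/root/hadoop/phoenix-4.3.0-bin/phoenix-4.3.0-client.jar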
Execute sqlline
Go into Phoenix's bin directory and execute ./sqlline.py hadoop107,hadoop108,hadoop104. The following exception appears:
15/03/10 10:14:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/03/10 10:14:10 WARN ipc.CoprocessorRpcChannel: Call failed on IOException
org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: SYSTEM.CATALOG: org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(Ljava/util/Collection;Ljava/util/Collection;)V
    at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:820)
    at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:7763)
    at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5890)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3433)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3415)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:30812)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2029)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(Ljava/util/Collection;Ljava/util/Collection;)V
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:800)
    ... 10 more

The reason is that this Phoenix version is incompatible with HBase 0.98.10-hadoop2. The solution suggested online is:
"You'll need to upgrade to HBase 0.98.10.1 to resolve this issue. Or, you can recompile Phoenix from source with -Dhbase.version=0.98.10 (or 0.98.10.1)." Considering the complexity of the cluster environment, the plan is to recompile Phoenix.
Recompiling Phoenix
1. First download the Phoenix 4.3.0 source: http://apache.spinellicreations.com/phoenix/phoenix-4.3.0/src/
2. After extracting it, change the HBase version in pom.xml to 0.98.10 and hadoop-two.version to 2.5.2.
3. Run Maven to compile:
$ mvn process-sources
$ mvn package -DskipTests
4. When the compilation finishes, go to the phoenix-4.3.0-src/phoenix-assembly/target directory and
use the new phoenix-4.3.0-server.jar to replace phoenix-4.3.0-server.jar on the HMaster and HRegionServer nodes;
use the new phoenix-4.3.0-client.jar to replace phoenix-4.3.0-client.jar on hadoop105.
Restart HBase when the replacement is done (a sketch of the whole redeployment sequence is shown below).
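Assuming the source was unpacked and built on hadoop105 and the directory layout is the one used earlier in this article, the redeployment could look like this:

[root@hadoop105 phoenix-4.3.0-src]# cd phoenix-assembly/target
[root@hadoop105 target]# scp phoenix-4.3.0-server.jar root@hadoop104:/root/hadoop/hbase-0.98.10-hadoop2/lib/
[root@hadoop105 target]# scp phoenix-4.3.0-server.jar root@hadoop108:/root/hadoop/hbase-0.98.10-hadoop2/lib/
[root@hadoop105 target]# scp phoenix-4.3.0-server.jar root@hadoop107:/root/hadoop/hbase-0.98.10-hadoop2/lib/
[root@hadoop105 target]# cp phoenix-4.3.0-client.jar /root/hadoop/phoenix-4.3.0-bin/
# then, on the HMaster node (hadoop107):
[root@hadoop107 bin]# ./stop-hbase.sh && ./start-hbase.sh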
Verify the installation
At this point, execute the sqlline.py command again and query all the tables:
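A quick check is sqlline's !tables command; the SYSTEM tables that Phoenix creates on its first connection (SYSTEM.CATALOG and SYSTEM.SEQUENCE, plus any user tables) should now be listed without the earlier exception:

[root@hadoop105 bin]# ./sqlline.py hadoop107,hadoop108,hadoop104
0: jdbc:phoenix:hadoop107,hadoop108,hadoop104> !tables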
Test the installation
Some test scripts are included with the distribution by default; you can observe how the data changes by executing them (see the example below).
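For instance, the binary package ships sample scripts for a WEB_STAT table; loading and querying them with psql.py might look like this (the exact location of the examples directory can differ between releases):

[root@hadoop105 bin]# ./psql.py hadoop107,hadoop108,hadoop104 ../examples/WEB_STAT.sql ../examples/WEB_STAT.csv ../examples/WEB_STAT_QUERIES.sql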
Query the WEB_STAT data:
0: jdbc:phoenix:hadoop107,hadoop108,hadoop104> SELECT * FROM WEB_STAT;
+------+----------------+-----------+-------------------------+------+-----+----------------+
| HOST | DOMAIN         | FEATURE   | DATE                    | CORE | DB  | ACTIVE_VISITOR |
+------+----------------+-----------+-------------------------+------+-----+----------------+
| EU   | apple.com      | Mac       | 2013-01-01 01:01:01.000 | 35   | 22  | 3              |
| EU   | apple.com      | Store     | 2013-01-03 01:01:01.000 | 345  | 722 | 1              |
| EU   | google.com     | Analytics | 2013-01-13 08:06:01.000 | 25   | 2   | 6              |
| EU   | google.com     | Search    | 2013-01-09 01:01:01.000 | 395  | 922 | 1              |
| EU   | Salesforce.com | Dashboard | 2013-01-06 05:04:05.000 | 12   | 22  | 4              |
| EU   | Salesforce.com | Login     | 2013-01-12 01:01:01.000 | 5    | 62  | 1              |
| EU   | Salesforce.com | Reports   | 2013-01-02 12:02:01.000 | 25   | 11

