I. Introduction to Phoenix
1. What is Phoenix?
There are many SQL query tools for HBase, such as Hive, Tez, Impala, Shark/Spark, and Phoenix; this article is mainly about Phoenix. Phoenix is an OLTP technology in the Hadoop ecosystem implemented on top of HBase, offering low latency, transactions, SQL support, and a JDBC interface. Phoenix also provides a secondary-index solution for HBase, which enriches the kinds of queries HBase can serve while preserving HBase's fast random access over massive data sets. In a production environment, however, it is hard to use Phoenix for pure OLTP: online transaction processing demands very low latency, and although Phoenix applies some optimizations when querying HBase, the latency is still not small. So it is more often used for OLAP-style analysis, with the results then written back to storage.
Phoenix is written entirely in Java and is provided as a JDBC driver embedded in HBase. The Phoenix query engine translates SQL queries into one or more HBase scans and orchestrates their execution to produce a standard JDBC result set. By using the HBase API directly, together with coprocessors and custom filters, it achieves millisecond-level performance for simple queries and second-level performance for queries over millions of rows.
2. Architecture of Phoenix
3. Download of Phoenix
Select the download matching your HBase version; I am using HBase 1.2.6: http://mirror.bit.edu.cn/apache/phoenix/
[admin@node21 ~]$ cd /opt/software/
[admin@node21 software]$ wget http://mirror.bit.edu.cn/apache/phoenix/apache-phoenix-4.14.0-hbase-1.2/bin/apache-phoenix-4.14.0-hbase-1.2-bin.tar.gz
[admin@node21 software]$ ll
-rw-rw-r-- 1 admin admin 346171547 Jun  5 06:16 apache-phoenix-4.14.0-hbase-1.2-bin.tar.gz
4. Cluster deployment
| node IP        | node name | HBase | Phoenix | Zookeeper | Hadoop |
| -------------- | --------- | ----- | ------- | --------- | ------ |
| 192.168.100.21 | node21    | √     | √       | √         | √      |
| 192.168.100.22 | node22    | √     |         | √         | √      |
| 192.168.100.23 | node23    | √     |         | √         | √      |
Hadoop Cluster Build Reference: CentOS7.5 build Hadoop2.7.6 fully distributed cluster
Zookeeper Cluster Construction Reference: CentOS7.5 build Zookeeper3.4.12 Cluster
HBase Cluster Construction Reference: CentOS7.5 build hbase1.2.6ha Cluster
II. Phoenix Server-side Installation
Official installation documentation: http://phoenix.apache.org/installation.html
1. Unzip the installation package
[admin@node21 software]$ tar zxvf apache-phoenix-4.14.0-hbase-1.2-bin.tar.gz -C /opt/module/
[admin@node21 software]$ cd /opt/module/
[admin@node21 module]$ mv apache-phoenix-4.14.0-hbase-1.2-bin phoenix-4.14.0-hbase-1.2
[admin@node21 module]$ ll
drwxr-xr-x 5 admin admin 4096 Jun  5 05:41 phoenix-4.14.0-hbase-1.2
2. Copy the jar package into the HBase lib directory
Go to the Phoenix installation directory and copy phoenix-4.14.0-hbase-1.2-server.jar into the HBase lib directory of every node in the cluster (the master node included).
[admin@node21 module]$ cd phoenix-4.14.0-hbase-1.2/
[admin@node21 phoenix-4.14.0-hbase-1.2]$ cp phoenix-4.14.0-hbase-1.2-server.jar /opt/module/hbase-1.2.6/lib/
[admin@node21 phoenix-4.14.0-hbase-1.2]$ scp phoenix-4.14.0-hbase-1.2-server.jar admin@node22:/opt/module/hbase-1.2.6/lib/
[admin@node21 phoenix-4.14.0-hbase-1.2]$ scp phoenix-4.14.0-hbase-1.2-server.jar admin@node23:/opt/module/hbase-1.2.6/lib/
3. Configure the Phoenix environment variable
[admin@node21 phoenix-4.14.0-hbase-1.2]$ sudo vi /etc/profile
export PHOENIX_HOME=/opt/module/phoenix-4.14.0-hbase-1.2
export PATH=$PATH:$PHOENIX_HOME/bin
[admin@node21 phoenix-4.14.0-hbase-1.2]$ source /etc/profile
4. Restart HBase
[admin@node21 ~]$ zkServer.sh start
[admin@node21 ~]$ start-dfs.sh
[admin@node21 ~]$ start-yarn.sh
[admin@node21 ~]$ start-hbase.sh
5. Start Phoenix
Start command: phoenix-4.14.0-hbase-1.2/bin/sqlline.py <zookeeper quorum>
[admin@node21 phoenix-4.14.0-hbase-1.2]$ bin/sqlline.py 192.168.100.21,192.168.100.22,192.168.100.23:2181
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:192.168.100.21,192.168.100.22,192.168.100.23:2181 none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:192.168.100.21,192.168.100.22,192.168.100.23:2181
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/module/phoenix-4.14.0-hbase-1.2/phoenix-4.14.0-hbase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/module/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
18/08/21 02:28:30 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connected to: Phoenix (version 4.14)
Driver: PhoenixEmbeddedDriver (version 4.14)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)... 133/133 (100%) Done
Done
sqlline version 1.2.0
0: jdbc:phoenix:192.168.100.21,192.168.100.22>
6. Verifying Phoenix
1) View all tables
Enter !tables to see which tables are available. The listing (sketched below) shows only the Phoenix system tables, which hold the metadata for user tables.
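The original screenshot is not reproduced here; on a fresh Phoenix 4.14 installation the listing typically looks roughly like this (columns trimmed, and the exact set of system tables may differ slightly between versions):
0: jdbc:phoenix:192.168.100.21,192.168.100.22> !tables
+--------------+-------------+---------------+
| TABLE_SCHEM  | TABLE_NAME  |  TABLE_TYPE   |
+--------------+-------------+---------------+
| SYSTEM       | CATALOG     | SYSTEM TABLE  |
| SYSTEM       | FUNCTION    | SYSTEM TABLE  |
| SYSTEM       | LOG         | SYSTEM TABLE  |
| SYSTEM       | SEQUENCE    | SYSTEM TABLE  |
| SYSTEM       | STATS       | SYSTEM TABLE  |
+--------------+-------------+---------------+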
Note: above we used the !tables command supported by sqlline.py, which lists all tables known to Phoenix. It is important to note that Phoenix does not directly display tables created in the HBase shell. The reason is simple: when a table is created through Phoenix, Phoenix records metadata for it and organizes the underlying HBase table itself; a table created in the HBase shell has none of this metadata, so it cannot be displayed directly. If you need to access a table created in the HBase shell from Phoenix, you have to create a view in Phoenix that maps to it.
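For illustration, such a mapping view looks roughly like this (the table name "t_user" and the column family "info" are made up for the example; the double-quoted names must exactly match the existing HBase table and column family):
0: jdbc:phoenix:192.168.100.21,192.168.100.22> CREATE VIEW "t_user" (pk VARCHAR PRIMARY KEY, "info"."name" VARCHAR, "info"."age" VARCHAR);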
2) Exit Phoenix
Enter the !exit command. (Note: earlier Phoenix versions (e.g., 2.11) required !quit to exit; current versions have changed this to !exit.)
III. Basic Phoenix Shell Commands
Tip: some of the commands below may have been deprecated in later Phoenix releases and replaced with other commands; please check against your version.
0: jdbc:phoenix:192.168.100.21,192.168.100.22> !help
!all                Execute the specified SQL against all the current connections
!autocommit         Set autocommit mode on or off
!batch              Start or execute a batch of statements
!brief              Set verbose mode off
!call               Execute a callable statement
!close              Close the current connection to the database
!closeall           Close all current open connections
!columns            List all the columns for the specified table
!commit             Commit the current transaction (if autocommit is off)
!connect            Open a new connection to the database
!dbinfo             Give metadata information about the database
!describe           Describe a table
!dropall            Drop all tables in the database
!exportedkeys       List all the exported keys for the specified table
!go                 Select the current connection
!help               Print a summary of command usage
!history            Display the command history
!importedkeys       List all the imported keys for the specified table
!indexes            List all the indexes for the specified table
!isolation          Set the transaction isolation for this connection
!list               List the current connections
!manual             Display the SQLLine manual
!metadata           Obtain metadata information
!nativesql          Show the native SQL for the specified statement
!outputformat       Set the output format for displaying results (table, vertical, csv, tsv, xmlattrs, xmlelements)
!primarykeys        List all the primary keys for the specified table
!procedures         List all the procedures
!properties         Connect to the database specified in the properties file(s)
!quit               Exits the program
!reconnect          Reconnect to the database
!record             Record all output to the specified file
!rehash             Fetch table and column names for command completion
!rollback           Roll back the current transaction (if autocommit is off)
!run                Run a script from the specified file
!save               Save the current variables and aliases
!scan               Scan for installed JDBC drivers
!script             Start saving a script to a file
!set                Set a sqlline variable

Variable          Value                   Description
===============   =====================   ==============================================
autoCommit        true/false              Enable/disable automatic transaction commit
autoSave          true/false              Automatically save preferences
color             true/false              Control whether color is used for display
fastConnect       true/false              Skip building table/column list for tab-completion
force             true/false              Continue running script even after errors
headerInterval    integer                 The interval between which headers are displayed
historyFile       path                    File in which to save command history. Default is $HOME/.sqlline/history (UNIX, Linux, Mac OS), $HOME/sqlline/history (Windows)
incremental       true/false              Do not receive all rows from server before printing the first row. Uses fewer resources, especially for long-running queries, but column widths may be incorrect
isolation         LEVEL                   Set transaction isolation level
maxColumnWidth    integer                 The maximum width to use when displaying columns
maxHeight         integer                 The maximum height of the terminal
maxWidth          integer                 The maximum width of the terminal
numberFormat      pattern                 Format numbers using DecimalFormat pattern
outputFormat      table/vertical/csv/tsv  Format mode for result display
propertiesFile    path                    File from which SqlLine reads properties on startup; default is $HOME/.sqlline/sqlline.properties (UNIX, Linux, Mac OS), $HOME/sqlline/sqlline.properties (Windows)
rowLimit          integer                 Maximum number of rows returned from a query; zero means no limit
showElapsedTime   true/false              Display execution time when verbose
showHeader        true/false              Show column names in query results
showNestedErrs    true/false              Display nested errors
showWarnings      true/false              Display connection warnings
silent            true/false              Be more silent
timeout           integer                 Query timeout in seconds; less than zero means no timeout
trimScripts       true/false              Remove trailing spaces from lines read from script files
verbose           true/false              Show verbose error messages and debug info

!sql                Execute a SQL command
!tables             List all the tables in the database
!typeinfo           Display the type map for the current connection
!verbose            Set verbose mode on

Comments, bug reports, and patches go to ???
IV. SQuirreL Client Installation
If you prefer to use a client GUI to interact with Phoenix, download and install SQuirreL. Because Phoenix is a JDBC driver, integration with tools like this is seamless. With SQuirreL you can issue SQL statements (create tables, insert data, run queries) in the SQL tab and examine table metadata (tables, columns, primary keys, and types) in the Objects tab.
1. Download SQuirreL
Official website: http://squirrel-sql.sourceforge.net/
Download Complete:
2. Install SQuirreL
Go to the directory containing squirrel-sql-3.8.1-standard.jar, open a CMD window, and execute the following command:
java -jar squirrel-sql-3.8.1-standard.jar
Then, apart from choosing the installation path, the remaining options can be set as needed; otherwise just keep clicking "Next".
3. Configure the SQuirreL Client
(1) Copy the phoenix-4.14.0-hbase-1.2-client.jar file from the Phoenix installation directory on the server into the lib folder under the SQuirreL installation directory on Windows, then double-click squirrel-sql.bat to start it.
(2) Add a new driver (Drivers -> New Driver), as shown:
(3) In the pop-up "Add Driver" window, set the driver "Name" to Phoenix, set the example URL to jdbc:phoenix:192.168.100.21 (where 192.168.100.21 is the address of your Phoenix server), and set "Class Name" to org.apache.phoenix.jdbc.PhoenixDriver. The final result is as follows:
When the settings are complete, click OK to close.
(4) Click the "Aliases" tab next to "Drivers" and create a new alias (Aliases -> New Alias), just as the driver was added in step (2). In the popup window, fill in Name, User Name, and Password (all three can be arbitrary), select the Phoenix driver we just added from the Driver drop-down list, then click the "Test" button to test the connection, followed by the "Connect" button; a "Connection successful" popup indicates the configuration succeeded. The result looks like this:
Click the OK button to close the pop-up window.
(5) Double-click the newly created test alias, then click the "Connect" button to open the SQL command line. Under the "SQL" tab you can write SQL commands; create a new person table to test:
CREATE TABLE person (id INTEGER NOT NULL PRIMARY KEY, name VARCHAR, age INTEGER)
Insert a piece of data:
UPSERT INTO person VALUES (1, 'Zhangsan', 18)
Query data:
SELECT * FROM person
The query results are as follows:
(6) Go to the HBase shell to see the data we just added:
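The original screenshot is omitted; a minimal check from the HBase shell looks like this (output not shown, and the exact cell layout depends on how Phoenix encodes the columns):
hbase(main):001:0> list
hbase(main):002:0> scan 'PERSON'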
You can see that our new PERSON table has been added, so SQuirreL has also been installed and configured successfully!
Note: table names created through SQuirreL/Phoenix are automatically converted to uppercase, and names in the HBase shell are case-sensitive, so the table name must be written in uppercase there, for example desc 'PERSON'; desc 'person' reports an error.
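If you want to keep a lowercase name instead, Phoenix lets you double-quote the identifier when creating the table; a small sketch with a made-up table name:
CREATE TABLE "person_lower" (id INTEGER NOT NULL PRIMARY KEY, name VARCHAR)
The table then appears as person_lower in the HBase shell, but every later SQL statement must also double-quote the name.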
V. Use of Phoenix
1. Command-line mode
The terminal interface that executes SQL from the command line is now bundled with Phoenix. To start it, execute the following command from the bin directory:
[admin@node21 phoenix-4.14.0-hbase-1.2]$ bin/sqlline.py 192.168.100.21,192.168.100.22,192.168.100.23:2181
1) Creating tables and inserting data: here is a demonstration of the official example. The contents of STOCK_SYMBOL.sql under the examples directory are shown below:
CREATE TABLE IF NOT EXISTS STOCK_SYMBOL (SYMBOL VARCHAR NOT NULL PRIMARY KEY, COMPANY VARCHAR);
UPSERT INTO STOCK_SYMBOL VALUES ('CRM', 'SalesForce.com');
SELECT * FROM STOCK_SYMBOL;
To execute a SQL script from the command line, pass the SQL file as an additional argument, as follows:
[admin@node21 phoenix-4.14.0-hbase-1.2]$ sqlline.py 192.168.100.21,192.168.100.22,192.168.100.23:2181 ./examples/STOCK_SYMBOL.sql
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
Setting property: [run, ./examples/STOCK_SYMBOL.sql]
issuing: !connect jdbc:phoenix:192.168.100.21,192.168.100.22,192.168.100.23:2181 none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:192.168.100.21,192.168.100.22,192.168.100.23:2181
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/module/phoenix-4.14.0-hbase-1.2/phoenix-4.14.0-hbase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/module/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
18/08/21 07:53:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connected to: Phoenix (version 4.14)
Driver: PhoenixEmbeddedDriver (version 4.14)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)... 136/136 (100%) Done
Done
1/4          CREATE TABLE IF NOT EXISTS STOCK_SYMBOL (SYMBOL VARCHAR NOT NULL PRIMARY KEY, COMPANY VARCHAR);
No rows affected (3.818 seconds)
2/4          UPSERT INTO STOCK_SYMBOL VALUES ('CRM', 'SalesForce.com');
1 row affected (0.166 seconds)
3/4          SELECT * FROM STOCK_SYMBOL;
+---------+-----------------+
| SYMBOL  |     COMPANY     |
+---------+-----------------+
| CRM     | SalesForce.com  |
+---------+-----------------+
1 row selected (0.184 seconds)
4/4          Closing: org.apache.phoenix.jdbc.PhoenixConnection
sqlline version 1.2.0
examples/STOCK_SYMBOL.csv:
AAPL,Apple Inc.
CRM,Salesforce
GOOG,Google
HOG,Harlet-Davidson Inc.
HPQ,Hewlett Packard
INTC,Intel
MSFT,Microsoft
WAG,Walgreens
WMT,Walmart
2) Importing data: in addition, you can use bin/psql.py to load CSV data or execute SQL scripts. For example:
PS: the -t option is followed by the table name, and ./examples/STOCK_SYMBOL.csv is the CSV data file (note that the field delimiter in the data must be a comma).
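The concrete command is only shown as a screenshot in the original post; using the paths from above, it has roughly this shape:
[admin@node21 phoenix-4.14.0-hbase-1.2]$ bin/psql.py -t STOCK_SYMBOL 192.168.100.21,192.168.100.22,192.168.100.23:2181 ./examples/STOCK_SYMBOL.csv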
3) Querying data
[admin@node21 phoenix-4.14.0-hbase-1.2]$ bin/sqlline.py 192.168.100.21,192.168.100.22,192.168.100.23:2181
0: jdbc:phoenix:192.168.100.21,192.168.100.22> SELECT * FROM STOCK_SYMBOL;
Query the total number of rows:
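The result appears only as a screenshot in the original post; the query itself is a plain COUNT, for example:
0: jdbc:phoenix:192.168.100.21,192.168.100.22> SELECT COUNT(*) FROM STOCK_SYMBOL;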
2. Client mode
SQuirreL is the client used to connect to Phoenix, as described earlier.
VI. Operating HBase through the Phoenix API
1. Reference links
FAQ reference: http://phoenix.apache.org/faq.html#What_is_the_Phoenix_JDBC_URL_syntax
Help document reference: https://help.aliyun.com/document_detail/69105.html?spm=a2c4g.11186623.6.591.420a48cbSK2wy8
2. Preparation of the development environment
IDEA, JDK 1.8, Windows 10, Hadoop 2.7.6, HBase 1.2.6, Phoenix 4.14.0
1. Copy the following files from the cluster and place them under the project's src directory: core-site.xml, hbase-site.xml, hdfs-site.xml.
2. Add Phoenix's phoenix-4.14.0-client.jar and phoenix-core-4.14.0.jar to the project classpath.
3. In the hosts file of every node in the cluster, add the client machine's hostname and IP.
4. In the client machine's hosts file, add the hostnames and IPs of the cluster nodes (see the sketch below).
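For example, the entries added on the Windows client would look like this (using the IPs from the cluster table above):
192.168.100.21 node21
192.168.100.22 node22
192.168.100.23 node23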
3. Example one
package com.xyg.phoenix;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

/**
 * Author: Mr.Deng
 * Date: 2018/8/25
 * Desc: create table, create index, insert data, select table.
 */
public class TestPhoenixJDBC {

    private static final String DRIVER = "org.apache.phoenix.jdbc.PhoenixDriver";

    public static void main(String[] args) throws SQLException {
        Connection conn = null;
        Statement stmt = null;
        ResultSet rs = null;
        try {
            // Register the Phoenix JDBC driver
            Class.forName(DRIVER);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
        conn = DriverManager.getConnection("jdbc:phoenix:node21,node22,node23:2181");
        stmt = conn.createStatement();
        stmt.execute("DROP TABLE IF EXISTS test");
        stmt.execute("CREATE TABLE test (mykey INTEGER NOT NULL PRIMARY KEY, mycolumn VARCHAR)");
        stmt.execute("CREATE INDEX test_idx ON test (mycolumn)");
        stmt.executeUpdate("UPSERT INTO test VALUES (1, 'World!')");
        stmt.executeUpdate("UPSERT INTO test VALUES (2, 'Hello')");
        stmt.executeUpdate("UPSERT INTO test VALUES (3, 'World!')");
        // Phoenix connections do not auto-commit by default, so commit explicitly
        conn.commit();
        rs = stmt.executeQuery("SELECT mykey FROM test WHERE mycolumn = 'Hello'");
        while (rs.next()) {
            System.out.println(rs.getInt(1));
        }
        rs.close();
        stmt.close();
        conn.close();
    }
}
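To run the example from the Windows client once it is compiled, something like the following should work, assuming the Phoenix fat client jar sits in the current directory and the compiled classes (plus the copied *-site.xml files) are on the classpath; the paths are illustrative only:
java -cp .;phoenix-4.14.0-hbase-1.2-client.jar com.xyg.phoenix.TestPhoenixJDBC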