How are partitions divided when querying Phoenix data with MapReduce/Hive?
PhoenixInputFormat
A glance at the source code shows:
public List<InputSplit> getSplits(JobContext context) throws IOException, InterruptedException {
    Configuration configuration = context.getConfiguration();
    QueryPlan queryPlan = this.getQueryPlan(context, configuration);
    List<KeyRange> allSplits = queryPlan.getSplits();
    List<InputSplit> splits = this.generateSplits(queryPlan, allSplits);
    return splits;
}
getQueryPlan builds a QueryPlan from the SELECT statement; for a query like this the concrete plan is a ScanPlan, an implementation of the QueryPlan interface. getQueryPlan performs one special operation:
queryPlan.iterator(MapReduceParallelScanGrouper.getInstance());
If the HBase table has more than one region, this call splits the single Scan into multiple Scans, one per region, so each region corresponds to one split. The logic is similar to running MapReduce directly on HBase; the difference is in the implementation: the splitting is done through the Phoenix QueryPlan rather than through the HBase API.
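The per-region splitting can be sketched in plain Java. This is an illustration, not Phoenix source: `perRegionRanges` is a hypothetical helper that derives the [startRow, stopRow) range of each region from the table's split points, which is the shape of the ranges MapReduceParallelScanGrouper ends up producing.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch, not Phoenix source: given a table's split points,
// derive the [startRow, stopRow) range of each region. The scan grouper
// effectively clips the query's Scan against these ranges, one per region.
public class RegionRanges {
    // "" denotes an unbounded boundary, matching HBase's convention.
    static List<String[]> perRegionRanges(List<String> splitPoints) {
        List<String[]> ranges = new ArrayList<>();
        String start = "";
        for (String split : splitPoints) {
            ranges.add(new String[] { start, split });
            start = split;
        }
        ranges.add(new String[] { start, "" }); // last region is open-ended
        return ranges;
    }

    public static void main(String[] args) {
        for (String[] r : perRegionRanges(List.of("CS", "EU", "NA"))) {
            System.out.println("[" + r[0] + ", " + r[1] + ")");
        }
    }
}
```

With the split points used in the example below ('CS', 'EU', 'NA'), this prints the same four ranges that show up later in the Scan dump.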
The following example deepens the understanding of this process.
Creating the Phoenix Table
The table is pre-split into 4 regions: [-∞, CS), [CS, EU), [EU, NA), [NA, +∞)
create table TEST (HOST varchar not null primary key, DESCRIPTION varchar) split on ('CS', 'EU', 'NA');
upsert into TEST (HOST, DESCRIPTION) values ('CS11', 'cccccccc');
upsert into TEST (HOST, DESCRIPTION) values ('EU11', 'eeeddddddddd');
upsert into TEST (HOST, DESCRIPTION) values ('NA11', 'nnnnneeeddddddddd');
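As a side note, which region a row key lands in follows from an ordered comparison against the split points. The sketch below (a hypothetical `regionIndex` helper, not part of Phoenix or HBase) mirrors how region lookup behaves for the keys upserted above:

```java
import java.util.List;

// Illustrative helper, not part of Phoenix or HBase: locate the region
// that holds a given row key by walking the sorted split points.
public class RegionLocator {
    // Region 0 is [-inf, splits[0]); region i is [splits[i-1], splits[i]).
    static int regionIndex(List<String> splits, String rowKey) {
        int i = 0;
        while (i < splits.size() && rowKey.compareTo(splits.get(i)) >= 0) {
            i++;
        }
        return i;
    }

    public static void main(String[] args) {
        List<String> splits = List.of("CS", "EU", "NA");
        System.out.println(regionIndex(splits, "CS11")); // lands in [CS, EU)
        System.out.println(regionIndex(splits, "EU11")); // lands in [EU, NA)
        System.out.println(regionIndex(splits, "NA11")); // lands in [NA, +inf)
    }
}
```

So the three rows land in three different regions, which is why three of the four scans below return data.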
0: jdbc:phoenix:localhost> select * from test;
+-------+--------------------+
| HOST  |    DESCRIPTION     |
+-------+--------------------+
| CS11  | cccccccc           |
| EU11  | eeeddddddddd       |
| NA11  | nnnnneeeddddddddd  |
+-------+--------------------+
Spying on the ScanPlan
import org.apache.hadoop.hbase.client.Scan;
import org.apache.log4j.BasicConfigurator;
import org.apache.phoenix.compile.QueryPlan;
import org.apache.phoenix.iterate.MapReduceParallelScanGrouper;
import org.apache.phoenix.jdbc.PhoenixStatement;

import java.io.IOException;
import java.sql.*;
import java.util.List;

public class LocalPhoenix {
    public static void main(String[] args) throws SQLException, IOException {
        BasicConfigurator.configure();
        Statement stmt = null;
        ResultSet rs = null;
        Connection con = DriverManager.getConnection("jdbc:phoenix:localhost:2181:/hbase");
        stmt = con.createStatement();
        PhoenixStatement pstmt = (PhoenixStatement) stmt;
        QueryPlan queryPlan = pstmt.optimizeQuery("select * from TEST");
        queryPlan.iterator(MapReduceParallelScanGrouper.getInstance());
        Scan scan = queryPlan.getContext().getScan();
        List<List<Scan>> scans = queryPlan.getScans();
        for (List<Scan> sl : scans) {
            System.out.println();
            for (Scan s : sl) {
                System.out.print(s);
            }
        }
        con.close();
    }
}
The 4 Scans are as follows:
{"loadColumnFamiliesOnDemand": null, "startRow": "", "stopRow": "CS", "batch": -1, "cacheBlocks": true, "totalColumns": 1, "maxResultSize": -1, "families": {"0": ["ALL"]}, "caching": +, "maxVersions": 1, "timeRange": [0,1523338217847]}
{"loadColumnFamiliesOnDemand": null, "startRow": "CS", "stopRow": "EU", "batch": -1, "cacheBlocks": true, "totalColumns": 1, "maxResultSize": -1, "families": {"0": ["ALL"]}, "caching": +, "maxVersions": 1, "timeRange": [0,1523338217847]}
{"loadColumnFamiliesOnDemand": null, "startRow": "EU", "stopRow": "NA", "batch": -1, "cacheBlocks": true, "totalColumns": 1, "maxResultSize": -1, "families": {"0": ["ALL"]}, "caching": +, "maxVersions": 1, "timeRange": [0,1523338217847]}
{"loadColumnFamiliesOnDemand": null, "startRow": "NA", "stopRow": "", "batch": -1, "cacheBlocks": true, "totalColumns": 1, "maxResultSize": -1, "families": {"0": ["ALL"]}, "caching": +, "maxVersions": 1, "timeRange": [0,1523338217847]}
Disconnected from the target VM, address: '127.0.0.1:63406', transport: 'socket'
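Back in PhoenixInputFormat, each inner List&lt;Scan&gt; produced by the plan becomes one MapReduce input split. A minimal stand-in sketch of that last step, using simplified record types rather than the real Phoenix/Hadoop classes:

```java
import java.util.List;
import java.util.stream.Collectors;

// Stand-in sketch of PhoenixInputFormat.generateSplits: every inner
// List<Scan> from QueryPlan.getScans() becomes one MapReduce input split.
// Scan and InputSplit here are simplified records, not the real classes.
public class GenerateSplitsSketch {
    record Scan(String startRow, String stopRow) {}
    record InputSplit(List<Scan> scans) {}

    static List<InputSplit> generateSplits(List<List<Scan>> scans) {
        return scans.stream().map(InputSplit::new).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // The four per-region scans observed above.
        List<List<Scan>> scans = List.of(
                List.of(new Scan("", "CS")),
                List.of(new Scan("CS", "EU")),
                List.of(new Scan("EU", "NA")),
                List.of(new Scan("NA", "")));
        System.out.println(generateSplits(scans).size());
    }
}
```

With the four per-region scans from the example table this yields four splits, so the MapReduce job runs one mapper per region.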
MapReduce atop Apache Phoenix (ScanPlan)