ImportTsv: HBase Data Import Tool

I. Overview

HBase officially provides MapReduce-based bulk data import tools: bulk load and ImportTsv. For bulk load, you can take a look at my other blog post.

HBase users typically import data through the HBase API, but importing a large batch of data at once can consume a large amount of RegionServer resources and affect queries against other tables stored on that RegionServer. This article analyzes the ImportTsv data import tool from its source code to explore how to import data into HBase efficiently.


II. Introduction to ImportTsv

ImportTsv is a command-line tool provided by HBase. With a single command it imports data files stored on HDFS, using a custom separator (\t by default), into an HBase table, which is convenient for importing large amounts of data. It supports two ways of getting data into an HBase table:

The first uses TableOutputFormat to insert the data directly from the MapReduce job via Puts;

The second has the MapReduce job generate HFiles, then runs the completebulkload command, which moves the files into the HBase table's directory on HDFS and makes them available to client queries.
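To make the input format concrete, suppose (hypothetical data and column names) a tab-separated file on HDFS contains:

row1	Zhang	28
row2	Li	35

Passing -Dimporttsv.columns=HBASE_ROW_KEY,f:name,f:age tells ImportTsv that the first field of each line is the row key (HBASE_ROW_KEY is the special token ImportTsv reserves for this) and that the remaining fields go into columns f:name and f:age of an assumed column family f.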


III. Source Code Analysis

This analysis is based on HBase 0.98.1 on CDH5. The entry class of ImportTsv is org.apache.hadoop.hbase.mapreduce.ImportTsv:

String hfileOutPath = conf.get(BULK_OUTPUT_CONF_KEY);
String columns[] = conf.getStrings(COLUMNS_CONF_KEY);
if (hfileOutPath != null) {
  if (!admin.tableExists(tableName)) {
    LOG.warn(format("Table '%s' does not exist.", tableName));
    // TODO: this is backwards. Instead of depending on the existence of a table,
    // create a sane splits file for HFileOutputFormat based on data sampling.
    createTable(admin, tableName, columns);
  }
  HTable table = new HTable(conf, tableName);
  job.setReducerClass(PutSortReducer.class);
  Path outputDir = new Path(hfileOutPath);
  FileOutputFormat.setOutputPath(job, outputDir);
  job.setMapOutputKeyClass(ImmutableBytesWritable.class);
  if (mapperClass.equals(TsvImporterTextMapper.class)) {
    job.setMapOutputValueClass(Text.class);
    job.setReducerClass(TextSortReducer.class);
  } else {
    job.setMapOutputValueClass(Put.class);
    job.setCombinerClass(PutCombiner.class);
  }
  HFileOutputFormat.configureIncrementalLoad(job, table);
} else {
  if (mapperClass.equals(TsvImporterTextMapper.class)) {
    usage(TsvImporterTextMapper.class.toString()
        + " should not be used for non bulkloading case. Use "
        + TsvImporterMapper.class.toString()
        + " or custom mapper whose value type is Put.");
    System.exit(-1);
  }
  // No reducers. Just write straight to table. Call initTableReducerJob
  // to set up the TableOutputFormat.
  TableMapReduceUtil.initTableReducerJob(tableName, null, job);
  job.setNumReduceTasks(0);
}

Judging from the ImportTsv.createSubmittableJob method, processing starts with the parameter BULK_OUTPUT_CONF_KEY (importtsv.bulk.output); its value directly determines how ImportTsv's MapReduce job ultimately writes into HBase.

If it is not null and the user has not supplied a custom mapper implementation class (parameter importtsv.mapper.class), PutSortReducer is used, which sorts the Puts; if each row has many column records, the sorting consumes a large amount of reducer memory.
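To make that memory cost concrete, here is a minimal sketch of the PutSortReducer idea, not the actual HBase source: the class name SortingPutReducer is mine, and the real PutSortReducer additionally flushes in chunks when a configurable memory threshold is exceeded.

// A simplified sketch of what PutSortReducer does (HBase 0.98 API assumed).
import java.io.IOException;
import java.util.List;
import java.util.TreeSet;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.KeyValueUtil;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.mapreduce.Reducer;

public class SortingPutReducer
    extends Reducer<ImmutableBytesWritable, Put, ImmutableBytesWritable, KeyValue> {
  @Override
  protected void reduce(ImmutableBytesWritable row, Iterable<Put> puts, Context context)
      throws IOException, InterruptedException {
    // Buffer every KeyValue of this row in an in-memory sorted set.
    TreeSet<KeyValue> sorted = new TreeSet<KeyValue>(KeyValue.COMPARATOR);
    for (Put put : puts) {
      for (List<Cell> cells : put.getFamilyCellMap().values()) {
        for (Cell cell : cells) {
          sorted.add(KeyValueUtil.ensureKeyValue(cell));
        }
      }
    }
    // Emit in sorted order so HFileOutputFormat can write sequentially.
    // A row with very many columns makes this set, and reducer memory, large.
    for (KeyValue kv : sorted) {
      context.write(row, kv);
    }
  }
}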

If it is null, TableMapReduceUtil.initTableReducerJob is called to initialize TableOutputFormat as the job's output. No reducer is needed in this case, because the put API of TableOutputFormat is called in batches from the mappers to submit the data to the RegionServers (equivalent to executing the HBase put API in parallel):

Configuration conf = job.getConfiguration();
HBaseConfiguration.merge(conf, HBaseConfiguration.create(conf));
job.setOutputFormatClass(TableOutputFormat.class);
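For comparison, here is a minimal sketch of a map-only job wired up through TableMapReduceUtil.initTableReducerJob, essentially what ImportTsv does in the non-bulk case. The table name "mytable", column family "f", and the toy two-field TSV layout are assumptions for illustration.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class DirectPutImport {

  // Minimal mapper: first tab-separated field is the row key, second the value.
  public static class TsvToPutMapper
      extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
    @Override
    protected void map(LongWritable offset, Text line, Context context)
        throws IOException, InterruptedException {
      String[] fields = line.toString().split("\t");
      byte[] rowKey = Bytes.toBytes(fields[0]);
      Put put = new Put(rowKey);
      put.add(Bytes.toBytes("f"), Bytes.toBytes("c1"), Bytes.toBytes(fields[1]));
      context.write(new ImmutableBytesWritable(rowKey), put);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "direct-put-import");
    job.setJarByClass(DirectPutImport.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    job.setMapperClass(TsvToPutMapper.class);
    // Passing null as the reducer class only wires up TableOutputFormat.
    TableMapReduceUtil.initTableReducerJob("mytable", null, job);
    // Map-only job: each mapper's Puts go straight to the RegionServers.
    job.setNumReduceTasks(0);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}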


IV. In Practice

1. Uploading data with TableOutputFormat's put API (non-bulk-loading)

$ bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=a,b,c <tablename> <hdfs-inputdir>
2. Using bulk-loading to generate StoreFiles (HFiles)

Step 1: Generate the HFiles

$ bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=a,b,c -Dimporttsv.bulk.output=hdfs://storefile-outputdir <tablename> <hdfs-inputdir>
Step 2: Complete the import

$ bin/hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles hdfs://storefile-outputdir <tablename>
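The same step can also be run programmatically through the LoadIncrementalHFiles API. A minimal sketch, assuming the table name "mytable" and the output directory from step 1:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class BulkLoadDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");
    try {
      LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
      // Moves (not copies) the HFiles under the table's region directories.
      loader.doBulkLoad(new Path("hdfs://storefile-outputdir"), table);
    } finally {
      table.close();
    }
  }
}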

V. Summary

When using ImportTsv, pay close attention to the importtsv.bulk.output parameter: using bulk output is generally friendlier to the RegionServers. Loading data this way consumes almost none of the RegionServers' compute resources, because it only moves HFiles on HDFS and then notifies the RegionServers to bring the new files online.

Copyright notice: This is an original article by the blogger and may not be reproduced without the blogger's permission.
