A program bug caused by improper use of the HBase coprocessor

In one system, a large volume of data was being written to a single table, and frequent compactions were making writes inefficient. The table had already been pre-split into several hundred regions, and for various reasons the number of regions could not be increased in the short term. The workaround adopted at the time was to create one table per hour and write only to the table corresponding to the current hour. Later we found that having 24 tables caused a great deal of trouble for downstream business processing, so the 24 tables needed to be merged back into one table. To support that, a DisableRegionCompaction coprocessor was written to disable compaction for data written before a specified time.
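The article does not show how the hourly tables were named. As a minimal illustration of the scheme (the "event_" prefix and the yyyyMMddHH timestamp format below are assumptions, not the actual naming), each write could be routed to the table for its hour:

import java.text.SimpleDateFormat;
import java.util.Date;

public class HourlyTables {
  // Assumed naming scheme: a fixed prefix plus an hour-granularity timestamp.
  private static final String PREFIX = "event_";

  // Returns the name of the table that writes for the given time go to,
  // e.g. "event_2013052714" for 14:00-14:59 on 2013-05-27.
  public static String tableNameFor(Date when) {
    return PREFIX + new SimpleDateFormat("yyyyMMddHH").format(when);
  }
}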

See the official introduction to the HBase coprocessor (https://blogs.apache.org/hbase/entry/coprocessor_introduction). HBase coprocessors come in two flavors: observers and endpoints. An observer is similar to a trigger in a traditional database, and an endpoint is similar to a stored procedure. There are three types of observer: RegionObserver, WALObserver, and MasterObserver.

RegionObserver: Provides hooks for data manipulation events: Get, Put, Delete, Scan, and so on. There is an instance of a RegionObserver coprocessor for every table region, and the scope of the observations it can make is constrained to that region.

WALObserver: Provides hooks for write-ahead log (WAL) related operations. This is a way to observe or intercept WAL writing and reconstruction events. A WALObserver runs in the context of WAL processing; there is one such context per region server.

MasterObserver: Provides hooks for DDL-type operations, i.e., create, delete, modify table, etc. The MasterObserver runs within the context of the HBase master.

To control the compaction behavior of an HBase table, in theory you only need to write a RegionObserver coprocessor for the region. So I wrote a DisableRegionCompaction class that implements the RegionObserver interface and overrides the preCompactSelection hook; the other methods were left as the stubs automatically generated by Eclipse.

public void preCompactSelection(ObserverContext<RegionCoprocessorEnvironment> c,
    Store store, List<StoreFile> candidates) {
  // candidates holds all the StoreFiles that are candidates for compaction.
  // The main task here: remove from candidates every StoreFile written more
  // than one hour ago, so that it does not participate in the compaction.
}
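The original article does not show the filtering code itself. Below is a minimal sketch of what the body might look like against the 0.94-era API (usual imports assumed), judging a StoreFile's age by its modification time on HDFS; the cutoff computation and the age test are illustrative assumptions, not the author's actual code:

@Override
public void preCompactSelection(ObserverContext<RegionCoprocessorEnvironment> c,
    Store store, List<StoreFile> candidates) {
  long cutoff = System.currentTimeMillis() - 60 * 60 * 1000L; // one hour ago
  FileSystem fs = c.getEnvironment().getRegion().getFilesystem();
  Iterator<StoreFile> it = candidates.iterator();
  while (it.hasNext()) {
    StoreFile sf = it.next();
    try {
      // Assumption: use the HFile's modification time on HDFS as its age.
      if (fs.getFileStatus(sf.getPath()).getModificationTime() < cutoff) {
        it.remove(); // excluded: this StoreFile will not take part in compaction
      }
    } catch (IOException e) {
      // If the age cannot be determined, leave the file in the candidate list.
    }
  }
}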

Data loss showed up during testing. The test table contained four records, stored in four HFiles:

As the figure showed, the table had four HFiles; the intent was that the two older HFiles would be kept out of the compaction while the remaining two were merged.

What actually happened after a major_compact: the data in the two HFiles that preCompactSelection had removed from the candidate list still existed, but the data in the two StoreFiles that did participate in the compaction was lost entirely!

Looking at the log on the region server:

It showed that two StoreFiles took part in the compaction, but the result data was null.

Checking the HBase 0.94.1 source revealed that compactStore() in org/apache/hadoop/hbase/regionserver/Store.java was returning null.

Within the compactStore() code, these lines looked like the most likely culprit:

        /* include deletes, unless we are doing a major compaction */
        scanner = new StoreScanner(this, scan, scanners,
            majorCompaction ? ScanType.MAJOR_COMPACT : ScanType.MINOR_COMPACT,
            smallestReadPoint, earliestPutTs);
        if (region.getCoprocessorHost() != null) {
          InternalScanner cpScanner = region.getCoprocessorHost().preCompact(
              this, scanner);
          // NULL scanner returned from coprocessor hooks means skip normal processing
          if (cpScanner == null) {
            return null;
          }
          scanner = cpScanner;
        }

That pointed at preCompact, which is also a coprocessor hook. Sure enough, the DisableRegionCompaction code I had written (auto-generated by Eclipse) looked like this:

public InternalScanner preCompact(
    ObserverContext<RegionCoprocessorEnvironment> c, Store store,
    InternalScanner scanner) {
  // TODO Auto-generated method stub
  return null;
}

This was the problem. The stub returns null where it should return the input scanner unchanged, since no custom preCompact behavior is needed here. As the compactStore() code above shows, a null scanner from the coprocessor hook means "skip normal processing": the compaction writes out nothing, yet the participating StoreFiles are still replaced, so their data is lost.

In fact, the preCompact method is declared in the RegionObserver interface:

  /**
   * Called prior to writing the {@link StoreFile}s selected for compaction into
   * a new {@code StoreFile}.  To override or modify the compaction process,
   * implementing classes have two options:
   * <ul>
   *   <li>Wrap the provided {@link InternalScanner} with a custom
   *   implementation that is returned from this method.  The custom scanner
   *   can then inspect {@link KeyValue}s from the wrapped scanner, applying
   *   its own policy to what gets written.</li>
   *   <li>Call {@link org.apache.hadoop.hbase.coprocessor.ObserverContext#bypass()}
   *   and provide a custom implementation for writing of new
   *   {@link StoreFile}s.  Note: any implementations bypassing
   *   core compaction using this approach must write out new store files
   *   themselves or the existing data will no longer be available after
   *   compaction.</li>
   * </ul>
   * @param c the environment provided by the region server
   * @param store the store being compacted
   * @param scanner the scanner over existing data used in the store file
   * rewriting
   * @return the scanner to use during compaction.  Should not be {@code null}
   * unless the implementation is writing new store files on its own.
   * @throws IOException if an error occurred on the coprocessor
   */
  InternalScanner preCompact(final ObserverContext<RegionCoprocessorEnvironment> c,
      final Store store, final InternalScanner scanner) throws IOException;

Note the description of the return value: "@return the scanner to use during compaction. Should not be {@code null} unless the implementation is writing new store files on its own."

Reading the HBase code more carefully, we found that HBase already provides an abstract class, BaseRegionObserver, which implements the RegionObserver interface. Its implementation of preCompact is:

  @Override
  public InternalScanner preCompact(ObserverContext<RegionCoprocessorEnvironment> e,
      final Store store, final InternalScanner scanner) throws IOException {
    return scanner;
  }

Therefore, the fix is simply to extend the abstract class BaseRegionObserver instead of implementing RegionObserver directly.
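A minimal sketch of the corrected class under this approach (only the hook we actually need is overridden; everything else, including preCompact, inherits the safe defaults):

import java.util.List;

import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.Store;
import org.apache.hadoop.hbase.regionserver.StoreFile;

public class DisableRegionCompaction extends BaseRegionObserver {

  @Override
  public void preCompactSelection(ObserverContext<RegionCoprocessorEnvironment> c,
      Store store, List<StoreFile> candidates) {
    // Same filtering as before: remove StoreFiles older than the cutoff
    // from candidates so they do not participate in compaction.
  }

  // No preCompact override here: BaseRegionObserver.preCompact() returns
  // the scanner it was given, so compaction proceeds normally.
}

To attach the coprocessor to a table in 0.94, one common route is HTableDescriptor.addCoprocessor(DisableRegionCompaction.class.getName()) on a disabled table, or listing the class under hbase.coprocessor.region.classes in hbase-site.xml.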

The official HBase blog post (https://blogs.apache.org/hbase/entry/coprocessor_introduction) describes the BaseRegionObserver class as follows:

We provide a convenient abstract class BaseRegionObserver, which implements all RegionObserver methods with default behaviors, so you can focus on what events you have interest in, without having to be concerned about process upcalls for all of them.

In the end, this was a low-level error caused by improper use of the interface. The lesson: read the official HBase documentation more carefully.

As an expert once put it:

A well-designed system generally provides an abstract base class for any interface that declares many methods.
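The RegionObserver/BaseRegionObserver pair is exactly this pattern. As a generic illustration (all names below are hypothetical):

// A wide interface paired with an abstract adapter that supplies safe
// defaults, so implementors override only the events they care about.
interface LifecycleObserver {
  void onStart();
  void onStop();
  void onError(Exception e);
}

abstract class BaseLifecycleObserver implements LifecycleObserver {
  @Override public void onStart() { /* safe no-op default */ }
  @Override public void onStop() { /* safe no-op default */ }
  @Override public void onError(Exception e) { /* safe no-op default */ }
}

// Overrides only what it needs; the other hooks stay harmless no-ops
// instead of accidentally returning null, as happened with preCompact.
class StartLogger extends BaseLifecycleObserver {
  @Override public void onStart() { System.out.println("started"); }
}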

Original article address: A program bug caused by improper use of the HBase coprocessor. Thanks to the original author for sharing.
