I. The requirement
Create a table with Snappy compression enabled in the HBase database.
II. Log in to the HBase shell and run the create statement
hbase(main):016:0> create 'dcs:t_dev_history', {NAME => 'f', DATA_BLOCK_ENCODING => 'PREFIX_TREE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'SNAPPY', MIN_VERSIONS => '0', TTL => '2678400', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
The create statement failed with this error:

channel 6: open failed: administratively prohibited: open failed

None of the previously created tables used Snappy compression, so we suspected that Snappy was not installed.
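A quick way to test that suspicion on a single node is HBase's built-in CompressionTest utility. A minimal sketch; the scratch path /tmp/snappy-test is an assumption, not from the original post:

# Write and re-read a test file with the snappy codec.
# file:///tmp/snappy-test is an arbitrary scratch path (assumed; adjust as needed).
hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/snappy-test snappy
# Prints SUCCESS if the native Snappy library loads; otherwise it throws an exception.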
III. Check all HBase nodes
On the master1 node, Snappy is installed correctly:
$ cd $HBASE_HOME/lib/native/linux-amd64-64/
$ ls
libhadoop.a         libhadoopsnappy.so.0      libhadoop.so.1.0.0  libhdfs.so        libpython2.7.so      libsnappy.so.1
libhadooppipes.a    libhadoopsnappy.so.0.0.1  libhadooputils.a    libhdfs.so.0.0.0  libpython2.7.so.1.0  libsnappy.so.1.2.0
libhadoopsnappy.so  libhadoop.so              libhdfs.a           libjvm.so         libsnappy.so
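Besides listing the directory by hand, Hadoop 2.x and later also provide a checknative command that reports whether the native Snappy library actually loads:

# Report the native libraries Hadoop can load on this node.
hadoop checknative -a
# Look for a line like "snappy: true <path-to-libsnappy>"; "false" means it cannot be loaded.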
On the master2 node, Snappy is installed incorrectly; presumably the scp destination path was wrong at installation time:
$ cd /var/lib/hbase/lib/native/    # no linux-amd64-64 subdirectory found here
$ ls
libhadoop.a         libhadoopsnappy.so.0      libhadoop.so.1.0.0  libhdfs.so        libpython2.7.so      libsnappy.so.1
libhadooppipes.a    libhadoopsnappy.so.0.0.1  libhadooputils.a    libhdfs.so.0.0.0  libpython2.7.so.1.0  libsnappy.so.1.2.0
libhadoopsnappy.so  libhadoop.so              libhdfs.a           libjvm.so         libsnappy.so

The libraries had been copied straight into native/ instead of native/linux-amd64-64/, which is presumably why HBase could not load Snappy.
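To run the same check on every node without logging in one by one, a small shell loop works. A sketch; it assumes passwordless ssh, and hadoop-test-master1 is an assumed host name by analogy with the two hosts named below:

# Check each node for the Snappy libraries in the expected directory.
for host in hadoop-test-master1 hadoop-test-master2 hadoop-test-node1; do
  echo "== $host =="
  ssh "$host" 'ls /var/lib/hbase/lib/native/linux-amd64-64/libsnappy* 2>/dev/null || echo "snappy libs missing"'
done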
IV. The fix
1. Copy the linux-amd64-64 directory to the nodes where it was missing:
scp -rp linux-amd64-64 hadoop-test-master2:/var/lib/hbase/lib/native/
scp -rp linux-amd64-64 hadoop-test-node1:/var/lib/hbase/lib/native/
2. Restart the HBase cluster:
./stop-hbase.sh
./start-hbase.sh
3. Re-run the create statement; this time the table is created successfully. A verification sketch follows below.
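The post ends at the successful create; to verify rather than assume, check the column-family attributes from the HBase shell:

# Run in the HBase shell; the output should show COMPRESSION => 'SNAPPY' for family 'f'.
describe 'dcs:t_dev_history'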
V. Summary
As a DBA, you must verify every operation you finish; verify, and verify again, with a rigorous attitude.
Many problems are not technical problems at all; they arise because people do not work carefully and just muddle through.
Compression is a good way to save space in Hadoop and HBase and is worth advocating, especially when budgets are tight.
Note that Snappy must be installed separately; it does not come bundled with Hadoop.
This article is from the "ROIDBA" blog; please keep this source: http://roidba.blog.51cto.com/12318731/1915081