Reprinted: http://blog.csdn.net/hxpjava1/article/details/20043703
Environment: Hadoop hadoop-2.2.0, HBase hbase-0.96.0.
The class org.apache.hadoop.hbase.client.Put changed between versions:
In 0.94.6: public class Put extends Mutation implements HeapSize, Writable, Comparable<Row>
In 0.96.0: public class Put extends Mutation implements HeapSize, Comparable<Row>
That is, in 0.96.0 Put no longer implements Writable, so it can no longer be used directly as a Hadoop-serializable type.
Solution: change public class MonthUserLoginTimeIndexReducer extends Reducer…
…and Hive run on the same machine, so I point it at the local machine.
2. Start HBase: /Work/hbase/bin/hbase master start
3. Start ZooKeeper: /Work/zookeeper/bin/zkServer.sh start
III. Execution
Create a table in Hive and associate it with HBase:
create table hbase_table_1 (key int, value string) stored by 'org.apache.hadoop.hive.…
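The DDL above is cut off in the source. For reference, the commonly documented form of a Hive-over-HBase table definition looks roughly like the following; the table name, column mapping, and HBase table name here are illustrative, not taken from the original:

```sql
-- Hive table backed by HBase via the standard HBaseStorageHandler.
-- The column mapping and hbase.table.name values are illustrative.
CREATE TABLE hbase_table_1 (key int, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "xyz");
```

The hbase.columns.mapping property maps Hive columns to the HBase row key (:key) and to column-family:qualifier pairs.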
…in the context of online transaction processing, low latency is required. Phoenix does some optimization when querying HBase, but the latency is still not small, so it is mostly used for OLAP, with the results then returned and stored. The official Phoenix site already explains Phoenix very well; if your English is good, you can read it there for a more formal treatment.
Phoenix installation
1. Download Phoenix. Phoenix and…
A schematic diagram of using HBase as a storage system:
Here, the HBase server side refers to the HBase cluster; the application writes to HBase through the storage end and reads from it through the query end.
From the HBase application point of view, it can be divided in…
I started ZooKeeper after starting HDFS, and started HBase last, but found that jps does not show the HMaster process:
[hadoop@hadoop000 bin]$ jps
3936 NameNode
4241 SecondaryNameNode
6561 Jps
4041 DataNode
3418 QuorumPeerMain
Then I ran ./hbase shell and hit an error. Below is my previous hbase-site.xml configuration file, and the error messages produced when the shell was executed:
2016-12-09 19:38:17,672 ERROR [main] client.ConnectionManager$HConnectionImplementation: The node /hbase is not in ZooKeeper. It should have been written by the master. Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
2016-12-09 19:38:17,779 ERROR [main] client.ConnectionManager$HConnectionImplementation: The node /…
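This error usually means the client is looking under a different ZooKeeper znode than the one the master writes to. A common fix, assuming the master uses the default /hbase znode, is to set zookeeper.znode.parent explicitly in the client's hbase-site.xml (the value shown is the default and may differ in your deployment):

```xml
<!-- hbase-site.xml: make the client's znode parent match the master's -->
<property>
  <name>zookeeper.znode.parent</name>
  <value>/hbase</value>
</property>
```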
…the performance requirements for range retrieval are not high in these scenarios. If you do not require efficient range retrieval, you can avoid generating redundant data, and consistency issues are thereby avoided indirectly; after all, share-nothing is recognized as the simplest and most effective solution.
Based on theory and practice, the following uses examples to describe how to choose among the various options. These conclusions come from the author's reading and continuous exchanges with colleagues; if there are errors, please point them out:
1. Build one table per index: each index builds its own table, and then…
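The one-table-per-index scheme typically stores, in each index table, rows keyed by the indexed value concatenated with the primary row key, so a prefix scan on the indexed value yields the matching primary keys. A minimal sketch in plain Java of such key construction (the separator choice and method names are illustrative assumptions, not from the original):

```java
import java.nio.charset.StandardCharsets;

public class IndexKey {
    // Separator byte between the indexed value and the primary key.
    // 0x00 sorts before any UTF-8 byte, preserving prefix-scan order.
    private static final byte SEP = 0x00;

    // Build an index-table row key: indexedValue + 0x00 + primaryKey.
    public static byte[] build(String indexedValue, String primaryKey) {
        byte[] v = indexedValue.getBytes(StandardCharsets.UTF_8);
        byte[] k = primaryKey.getBytes(StandardCharsets.UTF_8);
        byte[] out = new byte[v.length + 1 + k.length];
        System.arraycopy(v, 0, out, 0, v.length);
        out[v.length] = SEP;
        System.arraycopy(k, 0, out, v.length + 1, k.length);
        return out;
    }

    // Recover the primary key from an index row key.
    public static String primaryKey(byte[] indexRow) {
        int i = 0;
        while (indexRow[i] != SEP) i++;
        return new String(indexRow, i + 1, indexRow.length - i - 1,
                StandardCharsets.UTF_8);
    }
}
```

A scan over the index table with start key `indexedValue + 0x00` then returns every primary key carrying that value, which the application resolves against the main table.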
I used HBase two months ago and have already forgotten even the most basic commands, so I am leaving a reference here ~
Enter the hbase shell console:
$HBASE_HOME/bin/hbase shell
If you have Kerberos authentication, you need to authenticate with the appropriate keytab first (using the kinit command), and then use the…
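A typical authentication sequence looks like the following; the keytab path and principal are placeholders for your own deployment:

```shell
# Authenticate with a keytab first (path and principal are placeholders).
kinit -kt /etc/security/keytabs/hbase.keytab hbase/host@EXAMPLE.COM
# Then start the shell as usual.
$HBASE_HOME/bin/hbase shell
```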
The problem is described in detail below:
2016-12-09 15:10:39,160 ERROR [org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation] - The node /hbase is not in ZooKeeper. It should have been written by the master. Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
2016-12-09 15:10:39,264 ERROR [org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation…
How HBase is accessed:
1. Native Java API: the most conventional and efficient access method;
2. HBase shell: HBase's command-line tool, the simplest interface, suitable for HBase administration;
3. Thrift Gateway: uses Thrift serialization, supports C++, PHP, Python and other languages, suitable for online access to HBase from other heterogeneous systems…
Ganglia is an open-source monitoring project initiated by UC Berkeley, designed to measure thousands of nodes. Each computer runs a gmond daemon that collects and sends metric data (such as processor speed and memory usage) gathered from the operating system and the specified hosts. Hosts that receive all the metric data can display it and pass a simplified form of it up the hierarchy. It is precisely this hierarchical design that lets Ganglia scale so well…
HBase provides a shell terminal for user interaction. Run the command hbase shell to enter the command interface, and type help to see the help information. The following demonstrates HBase usage with an online student score table as an example.
Name | Grade | Course: Math | Course: Art
Tom  | 5     | 97           | 87
Jim  | 4     |              |
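The score table above can be created and populated in the hbase shell roughly as follows; the table name and column-family names (scores, grade, course) are assumptions inferred from the example, not taken from the original:

```shell
hbase> create 'scores', 'grade', 'course'
hbase> put 'scores', 'Tom', 'grade:', '5'
hbase> put 'scores', 'Tom', 'course:math', '97'
hbase> put 'scores', 'Tom', 'course:art', '87'
hbase> put 'scores', 'Jim', 'grade:', '4'
hbase> scan 'scores'
```

Here each student is a row key, grade is a single-valued column family, and each course is a qualifier under the course family.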
Master:
1. Assigns regions to region servers
2. Responsible for load balancing across region servers
3. Finds failed region servers and reassigns the regions on them
4. Garbage file collection on GFS
5. Handles schema update requests
Region server:
1. Maintains the regions the master assigns to it and handles IO requests to those regions
2. Responsible for splitting regions that have grown too large during operation
As you can see, the process by which a client accesses data on…
Data import:
./hbase org.apache.hadoop.hbase.mapreduce.Driver import <table name> <data file location>
The data file location can be prefixed with file:///; otherwise the HDFS address is accessed.
Data export:
./hbase org.apache.hadoop.hbase.mapreduce.Driver export <table name> <data file location>
Enter the shell:
cd /hbasehome/bin/
./hbase shell
2016-05-20 15:36:32,370 IN…