A few days ago I got the itch to play with HBase and study it carefully, so I wrote a small demo: connecting from Win7 to HBase running on another machine, a T510 running Ubuntu. Very simple CRUD operations, nothing obviously wrong with the code, but once it ran it just hung there and would not move on. The Eclipse console printed nothing, and the red stop button stayed lit. Puzzled, I did notice a couple of messages:
Opening socket connection to server 192.168.0.xx/192.168.0.xx:2181. Will not attempt to authenticate using SASL (java.lang.SecurityException: Unable to locate login configuration), and the warning hadoop.native.lib is deprecated. Instead, use io.native.lib.available. I assumed these two were the cause, found n posts online, tried n fixes, got no clue, and the result was still the same ....
The strange thing was that ZooKeeper connected fine, yet the client could not reach HBase's master or region servers. Why?
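For the record, the demo was essentially boilerplate like the following (a minimal sketch against the HBase 0.98 client API, not my original code; the table name "test" and column family "cf" are hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseCrudDemo {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            // Point the client at the cluster's ZooKeeper quorum
            // ("192.168.0.xx" is a placeholder, as in the log above).
            conf.set("hbase.zookeeper.quorum", "192.168.0.xx");
            HTable table = new HTable(conf, "test"); // hypothetical table
            try {
                // Write one cell, then read it back.
                Put put = new Put(Bytes.toBytes("row1"));
                put.add(Bytes.toBytes("cf"), Bytes.toBytes("q1"), Bytes.toBytes("v1"));
                table.put(put);
                Result r = table.get(new Get(Bytes.toBytes("row1")));
                System.out.println(Bytes.toString(
                    r.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q1"))));
            } finally {
                table.close();
            }
        }
    }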
So I swapped in a single-node pseudo-distributed Hadoop on a local virtual machine, plus hbase-0.98.1-hadoop2. From Win7 the Eclipse plugin connected to HDFS with no problem and the /hbase directory showed up. I configured hbase.zookeeper.quorum in hbase-site.xml to point to the new VM's local hostname, ran the demo, and it passed, finishing execution in a few seconds. Strange. After that I just played with the instance directly on the VM ....
Today I got a new notebook, memory up to 16G. I installed Ubuntu Server and set up hadoop2, hbase, and hive in one go, then pointed the quorum at this machine's address, and the earlier problem came back. The little red dot would not go away; the program hung there motionless. Strange, very strange ....
I decided to debug and trace it, to see exactly what was going on ....
Everything along the way was normal: it easily connected to ZooKeeper, located the meta address, located the master, and generated the RPC stubs, all completely OK ... but not everything was OK .... When execution reached
RpcRetryingCaller.callWithRetries, it blocked. After waiting more than ten seconds I was about to suspend the thread and see what the problem was. Unexpectedly, it stepped into the catch block with an exception. Very good, exactly what I wanted ....
    try {
      callable.prepare(tries != 0); // if called with false, check table status on ZK
      return callable.call();
    } catch (Throwable t) {
      if (LOG.isTraceEnabled()) {
        LOG.trace("Call exception, tries=" + tries + ", retries=" + retries + ", retryTime="
            + (EnvironmentEdgeManager.currentTimeMillis() - this.globalStartTime) + " ms", t);
      }
      ...
    }
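As an aside, this retry loop is also why the program seemed frozen rather than failing: callWithRetries keeps retrying with increasing pauses, and with default settings that can go on for minutes. A sketch of client settings that make this kind of failure surface within seconds, dropped into the demo's main() above (the values are illustrative, not recommendations):

    // Fail fast instead of retrying quietly for minutes (illustrative values).
    conf.setInt("hbase.client.retries.number", 3);
    conf.setInt("hbase.rpc.timeout", 5000);  // ms per RPC attempt
    conf.setInt("hbase.client.pause", 200);  // ms base pause between retries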
In the debugger, the exception's message read as follows:
org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=hadoop/192.168.0.105:50578]
Relief and anger at once: this wretched message is logged at TRACE level, so it was never printed for us. If I had not stepped through in the debugger, this deeply buried little stone would never have been found. Tears ....
I was a bit slow here; even at this point I did not immediately ask why it was connecting to .105, which is my T510's IP. My first reaction was that the locate step had cached data to a file, but the project had not generated any HBase-related files. Then, looking back at the message: the region server's hostname ... hadoop .....
Oh. I opened the hosts file, and there it was: the hostname hadoop mapped to the .105 IP. The truth was out ....
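A ten-second sanity check on the client side, without involving HBase at all (a minimal sketch; "hadoop" is the hostname from the exception above):

    import java.net.InetAddress;

    public class ResolveCheck {
        public static void main(String[] args) throws Exception {
            // Prints the IP this machine resolves the cluster hostname to.
            // A stale hosts entry shows up here immediately.
            InetAddress addr = InetAddress.getByName("hadoop");
            System.out.println(addr.getHostAddress());
        }
    }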
Those "Will not attempt to authenticate using SASL (java.lang.SecurityException: Unable to locate login configuration)" messages are not the problem at all. The facts are these:
1. The key to an HBase connection is RPC. The client obtains the master's address through ZooKeeper, likewise gets the location of the meta table through ZK, and then reads meta to obtain the addresses of the region servers. From this location information it builds the RPC stubs, and the stubs send the network packets that make the remote calls. Crucially, the locations returned by the HBase side (via ZK) are the hostnames or addresses as known inside the cluster; when the stub is created, the client resolves those names locally before connecting to the server.
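To see those cluster-side hostnames from client code, you can ask the connection where a row lives (a sketch against the 0.98 API; the table name "test" is hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.HConnection;
    import org.apache.hadoop.hbase.client.HConnectionManager;
    import org.apache.hadoop.hbase.util.Bytes;

    public class WhereIsMyRegion {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "hadoop");
            HConnection conn = HConnectionManager.createConnection(conf);
            try {
                // The hostname printed here is what the cluster reported;
                // the client resolves it locally when opening the socket.
                HRegionLocation loc =
                    conn.locateRegion(TableName.valueOf("test"), Bytes.toBytes("row1"));
                System.out.println(loc.getHostname() + ":" + loc.getPort());
            } finally {
                conn.close();
            }
        }
    }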
So hostname resolution on the Win7 side has to stay consistent with hostname resolution inside the cluster, as sketched below. Even while just learning, we should keep our environments properly standardized.
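Concretely: whatever the cluster's /etc/hosts maps the hostname to, C:\Windows\System32\drivers\etc\hosts on the Win7 side must agree. Something like the following, where 192.168.0.106 is a hypothetical stand-in for the new server's real address, with the stale 192.168.0.105 line for hadoop removed:

    192.168.0.106   hadoop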
Small problem, big hassle ... Anyhow, a little bit of harvest.