HBase 1.0.0 Source Code Analysis (1): HMaster Startup
This article is not yet a full parse of the startup code; it mainly traces the startup process up to the point where startMaster is called. Before that, it also introduces, for the first time in this series, how to debug HBase in pseudo-distributed mode.
After importing the source code into IntelliJ IDEA, we get the following project structure:
In the hbase-server module, under src/main/resources, we add hadoop-metrics2-hbase.properties, hbase-site.xml, log4j.properties, and the corresponding configuration. Apart from hbase-site.xml, these files can be copied directly from the conf directory unchanged; my minimal hbase-site.xml configuration is as follows:
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://localhost:9000/hbase</value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/opt/zookeeper/data</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>localhost</value>
</property>
<property>
  <name>hbase.defaults.for.version.skip</name>
  <value>true</value>
</property>
After the configuration is complete, you can start HMaster and HRegionServer from the IDE and step through the code, as sketched below. That covers the preliminary setup for source-level debugging.
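Concretely, I start the two daemons from IntelliJ as plain Application run configurations: main class org.apache.hadoop.hbase.master.HMaster (and org.apache.hadoop.hbase.regionserver.HRegionServer for the region server), each with the single program argument start. The same thing can be driven from a throwaway class if you prefer; the sketch below is my own helper, not HBase code, and assumes the hbase-server module plus the resource files above are on the classpath and that HDFS and ZooKeeper are already running:

public class DebugClusterLauncher {  // hypothetical helper class, not part of HBase
  public static void main(String[] args) throws Exception {
    // Equivalent to a run configuration with program argument "start";
    // this call blocks for as long as the master is running.
    org.apache.hadoop.hbase.master.HMaster.main(new String[] { "start" });
    // The region server is started the same way, normally from a second run configuration:
    // org.apache.hadoop.hbase.regionserver.HRegionServer.main(new String[] { "start" });
  }
}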
Before walking through the HMaster startup process, let's look at the dependencies between the key classes involved, as shown in the following diagram:
1. There is a main method in HMaster, which is the entry point of HMaster startup, as shown below:
public static void main(String [] args) {
  VersionInfo.logVersion();
  //System.out.println("########################################HMaster started!");
  new HMasterCommandLine(HMaster.class).doMain(args);
  //System.out.println("HMaster stoped!");
}
As the code shows, startup requires parameters to be passed in on the command line. Their usage is as follows:
[opts] start | stop | clear
start: start the Master; in local mode, the Master and a RegionServer are started in the same JVM.
stop: begin cluster shutdown; the Master sends a shutdown signal to the RegionServers.
clear: delete the master znode in ZooKeeper after a master crashes.
[opts]:
--minRegionServers=<servers>  minimum number of RegionServers needed to host user tables
--localRegionServers=<servers>  number of RegionServers started in the master process in local mode
--masters=<servers>  number of masters to start in this process
--backup  start the master in backup mode
2. Next, let's take a look at what the doMain method of HMasterCommandLine does, along with the ToolRunner.run method it delegates to:
public void doMain(String args[]) {
  try {
    int ret = ToolRunner.run(HBaseConfiguration.create(), this, args);
    if (ret != 0) {
      System.exit(ret);
    }
  } catch (Exception e) {
    LOG.error("Failed to run", e);
    System.exit(-1);
  }
}
public static int run(Configuration conf, Tool tool, String[] args) throws Exception {
  if (conf == null) {
    conf = new Configuration();
  }
  GenericOptionsParser parser = new GenericOptionsParser(conf, args);
  //set the configuration back, so that Tool can configure itself
  tool.setConf(conf);
  //get the args w/o generic hadoop args
  String[] toolArgs = parser.getRemainingArgs();
  return tool.run(toolArgs);
}
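The interesting piece here is GenericOptionsParser: ToolRunner folds the generic Hadoop options (such as -D key=value) into the Configuration and hands only the remaining arguments to the Tool, which in our case is HMasterCommandLine. A tiny standalone sketch (my own demo, not HBase code) shows the effect:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.GenericOptionsParser;

public class GenericOptionsDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] cmdLine = { "-D", "hbase.master.port=16000", "start" };
    GenericOptionsParser parser = new GenericOptionsParser(conf, cmdLine);
    // The -D pair has been absorbed into the Configuration ...
    System.out.println(conf.get("hbase.master.port"));   // prints 16000
    // ... and only "start" is left over for HMasterCommandLine.run()
    for (String arg : parser.getRemainingArgs()) {
      System.out.println(arg);
    }
  }
}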
So doMain hands off to ToolRunner.run, which strips the generic Hadoop options and then calls the run method implemented by HMasterCommandLine. That run method mainly parses and applies the remaining options, then dispatches to a different handler depending on the command that was passed, as shown below:
String command = remainingArgs.get(0);
if ("start".equals(command)) {
  return startMaster();
} else if ("stop".equals(command)) {
  return stopMaster();
} else if ("clear".equals(command)) {
  return (ZNodeClearer.clear(getConf()) ? 0 : 1);
} else {
  usage("Invalid command: " + command);
  return 1;
}
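For completeness, the option parsing that happens just before this dispatch is ordinary Apache Commons CLI handling of the [opts] listed earlier, with the parsed values mirrored into the Configuration. The sketch below is a simplified reconstruction rather than the verbatim HBase source, and the configuration keys it writes are my assumptions, labeled as such:

import java.util.List;
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.GnuParser;
import org.apache.commons.cli.Options;
import org.apache.hadoop.conf.Configuration;

public class MasterOptionParsingSketch {
  // Simplified sketch of the option handling in HMasterCommandLine.run(); not the verbatim source.
  static String parseAndConfigure(Configuration conf, String[] args) throws Exception {
    Options opt = new Options();
    opt.addOption("minRegionServers", true, "Minimum RegionServers needed to host user tables");
    opt.addOption("localRegionServers", true, "RegionServers started in the master JVM in local mode");
    opt.addOption("masters", true, "Masters to start in this process");
    opt.addOption("backup", false, "Start this master in backup mode");

    CommandLine cmd = new GnuParser().parse(opt, args);

    // Parsed options are mirrored into the Configuration so later startup code can read them.
    if (cmd.hasOption("backup")) {
      conf.setBoolean("hbase.master.backup", true);            // assumed property key
    }
    if (cmd.hasOption("minRegionServers")) {
      conf.setInt("hbase.regions.server.count.min",            // assumed property key
          Integer.parseInt(cmd.getOptionValue("minRegionServers")));
    }

    // Whatever is left over must be exactly one command: start | stop | clear.
    List<?> remaining = cmd.getArgList();
    if (remaining.size() != 1) {
      throw new IllegalArgumentException("Expected exactly one command: start|stop|clear");
    }
    return remaining.get(0).toString();   // handed to the dispatch shown above
  }
}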
At this point we have reached the concrete call that starts the Master. The startMaster code itself is fairly involved, so I will cover it in the next article.