HBase 1.0.0 Source Code Analysis: HMaster Startup (Part 2)



In the previous post, the analysis reached startMaster, the core startup function. This post walks through the components and services brought up while constructing HMaster. It focuses mainly on the overall flow; each individual part of the startup will be analyzed in detail in later posts. The main steps covered are: creating the RPC services, initializing the ZooKeeper cluster-management classes, and starting the various background worker threads.

Before diving in, it is worth knowing HMaster's inheritance hierarchy (the class diagram is omitted here): in HBase 1.0, HMaster extends HRegionServer, so a good portion of the bootstrap work happens in the parent class constructor.

Now let's walk through the startup flow in detail.
1. Startup code

// Log the runtime configuration and JVM state
logProcessInfo(getConf());
CoordinatedStateManager csm =
    CoordinatedStateManagerFactory.getCoordinatedStateManager(conf);
HMaster master = HMaster.constructMaster(masterClass, conf, csm);
2. Construction of the csm object. The conf.getClass() call here is a bit confusing at first sight. What it does is resolve the coordinated state manager class to use. Its three arguments are: 1. the configuration key under which a user-specified class name may be set (used to load the corresponding class); 2. the default implementation, ZkCoordinatedStateManager; 3. the interface CoordinatedStateManager, used to verify that whatever class is resolved actually implements it. The default implementation manages the HBase cluster's coordination through ZooKeeper.
The resolved class is then instantiated via reflection:
public static CoordinatedStateManager getCoordinatedStateManager(Configuration conf) {
  Class<? extends CoordinatedStateManager> coordinatedStateMgrKlass =
      conf.getClass(HConstants.HBASE_COORDINATED_STATE_MANAGER_CLASS,
          ZkCoordinatedStateManager.class, CoordinatedStateManager.class);
  return ReflectionUtils.newInstance(coordinatedStateMgrKlass, conf);
}
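To make the three-argument lookup concrete, here is a minimal, hedged sketch of how the default could be overridden. It is not HBase source: the class locations are taken from HBase 1.0 as I understand them, and com.example.MyCoordinatedStateManager is a hypothetical class name.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.CoordinatedStateManager;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.coordination.ZkCoordinatedStateManager;

public class CsmLookupSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();

    // Uncommenting this would plug in a custom implementation; it must be
    // assignable to CoordinatedStateManager (the third argument below),
    // otherwise conf.getClass() fails at lookup time.
    // conf.set(HConstants.HBASE_COORDINATED_STATE_MANAGER_CLASS,
    //     "com.example.MyCoordinatedStateManager");  // hypothetical class

    // With the key unset, the default ZkCoordinatedStateManager.class is returned.
    Class<? extends CoordinatedStateManager> klass =
        conf.getClass(HConstants.HBASE_COORDINATED_STATE_MANAGER_CLASS,
            ZkCoordinatedStateManager.class, CoordinatedStateManager.class);
    System.out.println("coordinated state manager class: " + klass.getName());
  }
}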
3. Next comes the construction of the HMaster object itself:
/**
 * Utility for constructing an instance of the passed HMaster class.
 * @param masterClass
 * @param conf
 * @return HMaster instance.
 */
public static HMaster constructMaster(Class<? extends HMaster> masterClass,
    final Configuration conf, final CoordinatedStateManager cp) {
  try {
    Constructor<? extends HMaster> c =
        masterClass.getConstructor(Configuration.class, CoordinatedStateManager.class);
    return c.newInstance(conf, cp);
  }
  // catch clauses omitted in this excerpt
The constructor taking a Configuration and a CoordinatedStateManager is used for construction. But why go through reflection at all? The point is extensibility: the concrete master class is passed in as type information, so a subclass of HMaster can be instantiated without modifying this call site, as the sketch below illustrates.
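As a hedged illustration of that extensibility argument: the master class is resolved from configuration before constructMaster() is ever called, so a subclass can be swapped in purely via hbase.master.impl. The key constant HConstants.MASTER_IMPL matches HBase 1.0 as far as I know, and com.example.CustomHMaster is hypothetical.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.master.HMaster;

public class MasterClassLookupSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();

    // Setting this key makes constructMaster() instantiate the subclass instead of
    // HMaster, without any change at the call site:
    // conf.set(HConstants.MASTER_IMPL, "com.example.CustomHMaster");  // hypothetical

    Class<? extends HMaster> masterClass =
        conf.getClass(HConstants.MASTER_IMPL, HMaster.class, HMaster.class);
    System.out.println("master class to construct: " + masterClass.getName());
  }
}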

Now let's look at this constructor. It first calls the parent class (HRegionServer) constructor:
super(conf, csm);
The parent constructor performs the various field and parameter assignments; the key steps are:
(1) Create the RPC services

rpcServices = createRpcServices();
(2) Connect to the ZooKeeper cluster
// Open connection to zookeeper and set primary watcher
zooKeeper = new ZooKeeperWatcher(conf, getProcessName() + ":" +
    rpcServices.isa.getPort(), this, canCreateBaseZNode());
(3) Create the file system access instance
this.fs = new HFileSystem(this.conf, useHBaseChecksum);
(4) Initialize the CoordinatedStateManager instance
this.csm = (BaseCoordinatedStateManager) csm;
this.csm.initialize(this);
this.csm.start();

(5) Create the various cluster tracker and management objects

masterAddressTracker = new MasterAddressTracker(getZooKeeper(), this);
masterAddressTracker.start();
clusterStatusTracker = new ClusterStatusTracker(zooKeeper, this);
clusterStatusTracker.start();
this.configurationManager = new ConfigurationManager();

Finally, start the RPC services and the web UI:
rpcServices.start();
putUpWebUI();
At this point the instantiation in the parent class, HRegionServer, is complete, and execution moves on to the HMaster-specific part of the constructor.
The HMaster-specific instantiation is fairly involved; only the key steps are analyzed here.

(1) Create the ActiveMasterManager (which later drives activation of a working HMaster)
activeMasterManager = new ActiveMasterManager(zooKeeper, this.serverName, this);

// The ActiveMasterManager registers itself as a cluster (ZooKeeper) listener:
ActiveMasterManager(ZooKeeperWatcher watcher, ServerName sn, Server master) {
  super(watcher);
  watcher.registerListener(this);
  this.sn = sn;
  this.master = master;
}
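For context, here is a hedged sketch of the listener pattern that registerListener() hooks into: the ActiveMasterManager extends HBase's ZooKeeperListener and reacts to create/delete events on the master znode. The callback names follow the HBase 1.0 zookeeper package as I recall them, and the hard-coded path below is only the default layout; treat both as assumptions.

import org.apache.hadoop.hbase.zookeeper.ZooKeeperListener;
import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;

public class MasterNodeListenerSketch extends ZooKeeperListener {
  // The real code compares against the configured master znode; this constant is
  // just the default layout, used here for illustration.
  private static final String MASTER_ZNODE = "/hbase/master";

  public MasterNodeListenerSketch(ZooKeeperWatcher watcher) {
    super(watcher);
    watcher.registerListener(this); // same registration as in the constructor above
  }

  @Override
  public void nodeCreated(String path) {
    if (MASTER_ZNODE.equals(path)) {
      // An active master appeared: a backup master stops trying to take over for now.
    }
  }

  @Override
  public void nodeDeleted(String path) {
    if (MASTER_ZNODE.equals(path)) {
      // The active master went away: trigger a new election attempt.
    }
  }
}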

(2) Start the Jetty server

int infoPort = putUpJettyServer();

2015-03-23 13:40:49,143 INFO [main] http.HttpRequestLog: Http request log for http.requests.master is not defined
2015-03-23 13:40:49,366 INFO [main] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2015-03-23 13:40:49,423 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2015-03-23 13:40:49,425 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-03-23 13:40:49,812 INFO [main] http.HttpServer: Jetty bound to port 16030

(3) Activate the master: startActiveMasterManager(infoPort);
This step has several sub-steps. startActiveMasterManager first creates a backup znode in ZooKeeper; once this master becomes the active master, that node is explicitly deleted:
master.ActiveMasterManager: Deleting ZNode for /hbase/backup-masters/xiaoyi-PC,52777,1427089138770 from backup master directory
After becoming active, finishActiveMasterInitialization(status) is called to start the master's background worker threads. The election idea behind this step is sketched below.
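The following is a simplified sketch of the underlying ZooKeeper election pattern, under stated assumptions: it uses the raw ZooKeeper client, hard-codes the default /hbase paths, and is not the ActiveMasterManager implementation itself, just the ephemeral-znode idea it relies on.

import org.apache.zookeeper.*;

public class MasterElectionSketch implements Watcher {
  private final ZooKeeper zk;
  private final String activePath = "/hbase/master";          // znode holding the active master
  private final String backupPath = "/hbase/backup-masters";  // parent of backup znodes

  public MasterElectionSketch(String quorum) throws Exception {
    this.zk = new ZooKeeper(quorum, 30000, this);
  }

  /** Returns true if this process became the active master. */
  public boolean tryBecomeActive(byte[] serverName) throws Exception {
    try {
      // Ephemeral: the znode disappears automatically if this master dies,
      // which is what lets a backup master take over.
      zk.create(activePath, serverName, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
      // Once active, remove our entry from the backup directory (mirrors the log line above).
      String myBackupNode = backupPath + "/" + new String(serverName, "UTF-8");
      if (zk.exists(myBackupNode, false) != null) {
        zk.delete(myBackupNode, -1);
      }
      return true;
    } catch (KeeperException.NodeExistsException e) {
      // Somebody else is already active; set a watch and wait for that node to go away.
      zk.exists(activePath, true);
      return false;
    }
  }

  @Override
  public void process(WatchedEvent event) {
    // A NodeDeleted event on activePath would trigger another election attempt.
  }
}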

(4) Set up the cluster connection

setupClusterConnection();

(5) Initialize the ZooKeeper-based system trackers
initializeZKBasedSystemTrackers();

(6) Start the various service worker threads

startServiceThreads();

// Start the executor service pools
this.service.startExecutorService(ExecutorType.MASTER_OPEN_REGION,
    conf.getInt("hbase.master.executor.openregion.threads", 5));
this.service.startExecutorService(ExecutorType.MASTER_CLOSE_REGION,
    conf.getInt("hbase.master.executor.closeregion.threads", 5));
this.service.startExecutorService(ExecutorType.MASTER_SERVER_OPERATIONS,
    conf.getInt("hbase.master.executor.serverops.threads", 5));
this.service.startExecutorService(ExecutorType.MASTER_META_SERVER_OPERATIONS,
    conf.getInt("hbase.master.executor.serverops.threads", 5));
this.service.startExecutorService(ExecutorType.M_LOG_REPLAY_OPS,
    conf.getInt("hbase.master.executor.logreplayops.threads", 10));
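The following is a conceptual sketch (plain java.util.concurrent, not HBase's ExecutorService class) of what startExecutorService() sets up: one bounded, named daemon thread pool per event type, sized by the corresponding "hbase.master.executor.*.threads" setting; the pool names and sizes here simply mirror the snippet above.

import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class MasterPoolsSketch {
  static ExecutorService namedPool(String name, int threads) {
    final AtomicInteger seq = new AtomicInteger();
    ThreadFactory tf = r -> {
      Thread t = new Thread(r, name + "-" + seq.incrementAndGet());
      t.setDaemon(true); // service threads must not block JVM shutdown
      return t;
    };
    return Executors.newFixedThreadPool(threads, tf);
  }

  public static void main(String[] args) {
    // In the real code the sizes come from Configuration.getInt(); 5 is the default above.
    ExecutorService openRegionPool  = namedPool("MASTER_OPEN_REGION", 5);
    ExecutorService closeRegionPool = namedPool("MASTER_CLOSE_REGION", 5);
    ExecutorService serverOpsPool   = namedPool("MASTER_SERVER_OPERATIONS", 5);

    openRegionPool.submit(() -> System.out.println("handle an open-region event"));
    openRegionPool.shutdown();
    closeRegionPool.shutdown();
    serverOpsPool.shutdown();
  }
}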
The master then waits for RegionServer instances to join the cluster under its management:
this.serverManager.waitForRegionServers(status);

activeMasterManager] master.ServerManager: Waiting for region servers count to settle;
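A hedged sketch of the "count to settle" logic behind waitForRegionServers(): block until at least a minimum number of region servers have checked in and the count has stopped changing for an interval. The method and parameter names below are stand-ins, not the ServerManager API.

import java.util.function.IntSupplier;

public class WaitForServersSketch {
  /**
   * Blocks until the reported server count is at least minToStart and has not
   * changed for settleMillis, or until timeoutMillis elapses.
   */
  public static void waitToSettle(IntSupplier onlineServerCount, int minToStart,
      long settleMillis, long timeoutMillis) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    int lastCount = -1;
    long lastChange = System.currentTimeMillis();
    while (System.currentTimeMillis() < deadline) {
      int count = onlineServerCount.getAsInt();
      if (count != lastCount) {
        lastCount = count;
        lastChange = System.currentTimeMillis();
      }
      boolean settled = System.currentTimeMillis() - lastChange >= settleMillis;
      if (count >= minToStart && settled) {
        return; // enough servers and a stable count: cluster initialization can proceed
      }
      Thread.sleep(100);
    }
  }
}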

Once the configured number of region servers has joined, the cluster initialization work is done.
The next heavyweight component is the LoadBalancer, which is responsible for distributing regions across the HRegionServer instances.
The master then checks whether hbase:meta has been assigned,
and finally starts several background chore threads for ongoing monitoring and maintenance. With that, HMaster initialization is complete:
// Start balancer and meta catalog janitor after meta and regions have
// been assigned.
status.setStatus("Starting balancer and catalog janitor");
this.clusterStatusChore = new ClusterStatusChore(this, balancer);
Threads.setDaemonThreadRunning(clusterStatusChore.getThread());
this.balancerChore = new BalancerChore(this);
Threads.setDaemonThreadRunning(balancerChore.getThread());
this.catalogJanitorChore = new CatalogJanitor(this, this);
Threads.setDaemonThreadRunning(catalogJanitorChore.getThread());
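As a conceptual sketch of the chore pattern used above: each chore is a daemon thread that wakes up on a fixed period and runs one maintenance task (balancing, catalog cleanup, cluster-status publishing). This sketch uses plain JDK scheduling rather than HBase's Chore and Threads utilities, and the periods are arbitrary placeholders, not the real defaults.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ChoreSketch {
  public static void main(String[] args) throws InterruptedException {
    ScheduledExecutorService chores = Executors.newScheduledThreadPool(2, r -> {
      Thread t = new Thread(r, "master-chore");
      t.setDaemon(true); // mirrors Threads.setDaemonThreadRunning(...)
      return t;
    });

    // "BalancerChore": periodically ask the LoadBalancer for a plan and execute it.
    chores.scheduleAtFixedRate(
        () -> System.out.println("run balancer"), 5, 5, TimeUnit.MINUTES);
    // "CatalogJanitor": periodically scan hbase:meta and clean up obsolete regions.
    chores.scheduleAtFixedRate(
        () -> System.out.println("scan hbase:meta for split parents"), 5, 5, TimeUnit.MINUTES);

    TimeUnit.SECONDS.sleep(1); // keep the demo alive briefly
    chores.shutdown();
  }
}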

