Hadoop source code learning notes (1) -- starting from the second season -- finding the main functions and reading the Configuration class



In the first season, we briefly studied what Hadoop is and how to use it. Tempted by this open-source project, we will now study how it is implemented.

Let me state up front that I come from a .NET background and am a little unfamiliar with Java, so these notes will weave in some Java learning from time to time as topics come up, including design patterns and the like. Thanks for your attention.

Throughout the learning process we will mainly use Eclipse; how to set up a debugging environment in Eclipse was covered earlier.

While browsing the source code earlier, we already located several main function entry points, so let's lay out a plan:

  1. FsShell main entry: org.apache.hadoop.fs.FsShell
  2. NameNode main entry: org.apache.hadoop.hdfs.server.namenode.NameNode
  3. DataNode main entry: org.apache.hadoop.hdfs.server.datanode.DataNode
  4. JobTracker main entry: org.apache.hadoop.mapred.JobTracker
  5. TaskTracker main entry: org.apache.hadoop.mapred.TaskTracker

We will study them in this order; other pieces, such as the SecondaryNameNode, will come later.

Likewise, the material splits into two steps: first HDFS, then MapReduce.

Before studying HDFS, let's look at the relationship among the client, NameNode, and DataNode:

Among them, the NameNode is the client's main interface and its only point of contact for metadata. It is mainly responsible for managing the file namespace and the mapping from data blocks to DataNodes.
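To make this concrete, here is a minimal sketch (the class name ListRoot is made up, and it assumes the 0.20-era FileSystem API) of a client operation that touches only metadata, and therefore only talks to the NameNode:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Hypothetical demo class, not part of Hadoop.
    public class ListRoot {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();   // picks up core-*.xml
            FileSystem fs = FileSystem.get(conf);       // connects to the NameNode named by fs.default.name
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath());   // directory metadata comes from the NameNode alone
            }
        }
    }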


Okay, let's take a look. First, let's get the programs running.

In Eclipse we can easily find the main function corresponding to each module, but running them directly is still inconvenient. For ease of debugging, we create three new entry classes.

These self-built entry classes exist purely for convenience. Their code is:

FsShellEnter.java

    import org.apache.hadoop.fs.FsShell;

    public class FsShellEnter {

        public static void main(String[] args) throws Exception {
            FsShell.main(new String[] { "-ls" });
        }
    }

NameNodeEnter.java

    public class NameNodeEnter {

        public static void main(String[] args) throws Exception {
            org.apache.hadoop.hdfs.server.namenode.NameNode.main(args);
        }
    }

DataNodeEnter.java

    public class DataNodeEnter {

        public static void main(String[] args) {
            org.apache.hadoop.hdfs.server.datanode.DataNode.main(args);
        }
    }

Running:

First, start a NameNode from the command line: $ bin/hadoop namenode

Then, in Eclipse, open FsShellEnter.java and click Run, and you can see the listing output in the console.

Now the other way around: in Eclipse, open NameNodeEnter.java and click Run.

The console prints a stream of startup information, indicating that the NameNode is running normally.

Open the command line and enter $ bin/hadoop fs -ls, and you can see the listing.

In this way, both directions work: code launched from Eclipse can talk to a daemon started from the command line, and vice versa.

Of course, no file-content operation is involved here, so the DataNode is never exercised; you can try that on your own.
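For the curious, a minimal sketch (the class name and path are made up) of an operation that does involve file contents, and therefore needs a live DataNode:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Hypothetical demo class, not part of Hadoop.
    public class WriteDemo {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            FSDataOutputStream out = fs.create(new Path("/tmp/hello.txt"));
            out.writeBytes("hello hdfs\n");   // the bytes stream to DataNodes, not through the NameNode
            out.close();
        }
    }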

 

Opening these main functions, you can see that the first thing each does is create a Configuration object. So let's take a look at what this class looks like.

First, recall how we have used this class before:

    Configuration conf = new Configuration();
    String name = conf.get("fs.default.name");
    System.out.println(name);

From the name and from this snippet, we can see that the Configuration class is used to read the configuration files; this program reads the value of fs.default.name from the configuration.

Observe its constructor:

    public Configuration() {
        this(true);
    }

    public Configuration(boolean loadDefaults) {
        this.loadDefaults = loadDefaults;
        if (LOG.isDebugEnabled()) {
            LOG.debug(StringUtils.stringifyException(new IOException("config()")));
        }
        synchronized (Configuration.class) {
            REGISTRY.put(this, null);
        }
    }

We find that the constructor performs almost no real work: it mainly records a loadDefaults value of true and registers the new instance.

Then observe the get function:

    public String get(String name) {
        return substituteVars(getProps().getProperty(name));
    }

    private synchronized Properties getProps() {
        if (properties == null) {
            properties = new Properties();
            loadResources(properties, resources, quietmode);
            if (overlay != null) {
                properties.putAll(overlay);
            }
        }
        return properties;
    }

The get function evaluates getProps().getProperty(name) to look up the raw value, then passes the result through substituteVars, a regular-expression helper that expands ${...} variable references in the returned value. Inside getProps, the properties field (a Properties, which is a Hashtable subclass) is checked: if it is null, it is created and the initial load is performed; otherwise it is returned directly. getProperty then fetches the value by its key.
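As a quick illustration of that expansion, a small sketch with made-up keys (as far as I can tell, substituteVars consults Java system properties first and then other configuration values):

    import org.apache.hadoop.conf.Configuration;

    // Hypothetical demo; the keys base.dir and log.dir are invented.
    public class VarExpansionDemo {
        public static void main(String[] args) {
            Configuration conf = new Configuration(false);  // no default resources needed
            conf.set("base.dir", "/data");
            conf.set("log.dir", "${base.dir}/logs");        // stored verbatim
            System.out.println(conf.get("log.dir"));        // get() expands it to "/data/logs"
        }
    }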

Clearly, lazy loading is at work here: the data in the configuration files is not loaded when the object is constructed, but only when it is first accessed.
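Since these notes promised occasional Java detours: the same lazy-initialization idiom, stripped of Hadoop, looks roughly like this (a sketch, not Hadoop code):

    import java.util.Properties;

    // Hadoop-free sketch of the lazy-loading idiom used by getProps().
    class LazyConfig {
        private Properties props;                    // stays null until first use

        synchronized String get(String key) {
            if (props == null) {                     // the first access pays the loading cost
                props = new Properties();
                props.setProperty("fs.default.name", "hdfs://localhost:9000"); // stand-in for loadResources()
            }
            return props.getProperty(key);
        }
    }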

Let's take a further look at how loadResources performs that initial load:

    private void loadResources(Properties properties,
                               ArrayList resources,
                               boolean quiet) {
        if (loadDefaults) {
            for (String resource : defaultResources) {
                loadResource(properties, resource, quiet);
            }

            // support the hadoop-site.xml as a deprecated case
            if (getResource("hadoop-site.xml") != null) {
                loadResource(properties, "hadoop-site.xml", quiet);
            }
        }

        for (Object resource : resources) {
            loadResource(properties, resource, quiet);
        }
    }

As the code shows, when loadDefaults is set, the resources in defaultResources are loaded first, followed by hadoop-site.xml (kept for backward compatibility), and finally any explicitly added resources.

defaultResources is populated as follows:

    static {
        ...
        addDefaultResource("core-default.xml");
        addDefaultResource("core-site.xml");
    }

    public static synchronized void addDefaultResource(String name) { ... }

So, by default, the core-default.xml and core-site.xml files are loaded.

At this point we can open these XML files and take a look.

From them, we can see that the configuration files store key-value pairs, and each configuration item carries a description element explaining it. A program can therefore obtain a value by reading its key.

At the same time, core-site.xml is our own configuration file, and a closer look shows that some of its configuration items also appear in core-default.xml. When loading, core-default.xml comes before core-site.xml, and a later resource overwrites entries with the same key, so the site file's values win.

In other words, if hadoop.tmp.dir is not configured, it falls back to the default, the /tmp/... directory.

The same goes for the other Hadoop settings: you can look them up in core-default.xml, and either modify that file directly or, better, copy the property into core-site.xml and override it there.
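A small sketch of the override behavior (the class name is made up, and the printed value of course depends on your own files):

    import org.apache.hadoop.conf.Configuration;

    // Hypothetical demo class; prints whichever value survived the load order.
    public class OverrideDemo {
        public static void main(String[] args) {
            Configuration conf = new Configuration();  // loads core-default.xml first, then core-site.xml
            // If core-site.xml does not set hadoop.tmp.dir, this prints the
            // core-default.xml default (a /tmp/... path); otherwise the site value wins.
            System.out.println(conf.get("hadoop.tmp.dir"));
        }
    }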

 

Continuing through the rest of the Configuration class's methods:

We find many get variants returning different types (int, boolean, and so on), which makes it convenient to use the values directly without parsing strings.

We also see a pile of set functions. These only modify the in-memory properties and are never saved back to the files. From them it is also apparent that a Configuration can work even without any configuration file.
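To illustrate both points, a minimal sketch (the class name and the my.app.* keys are invented for illustration; getInt and setInt do exist on Configuration):

    import org.apache.hadoop.conf.Configuration;

    // Hypothetical demo class; nothing here touches an XML file.
    public class NoFileDemo {
        public static void main(String[] args) {
            Configuration conf = new Configuration(false);   // skip the default resources entirely
            conf.set("my.app.name", "demo");                 // in-memory only, never written back to XML
            conf.setInt("my.app.retries", 3);
            int retries = conf.getInt("my.app.retries", 1);  // typed read, with a default of 1
            System.out.println(conf.get("my.app.name") + " retries=" + retries);
        }
    }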

 

