Hadoop self-study note (5): Configuring a distributed Hadoop environment


In the previous lesson, we covered how to build a Hadoop environment on a single machine. We configured only one machine, HNName, which ran all of the Hadoop components: the Name Node, Secondary Name Node, Job Tracker, and Task Tracker. This section describes how to spread those components across different machines to build a distributed Hadoop configuration.


1. Overview of distributed Hadoop installation

A) 2-10 nodes: the Name Node, Job Tracker, and Secondary Name Node can all be placed on one machine, with all Data Nodes and Task Trackers on the other machines.

B) 10-40 nodes: the Secondary Name Node should be separated onto its own machine.

C) 100+ nodes: the Name Node, Secondary Name Node, and Job Tracker each run on their own machines, rack awareness support is added, and various optimization settings are required.


This lesson covers the following steps:

Configure passwordless SSH to all machines (as described in the previous lesson)

Configure masters and slaves

Configure all *-site files

Learn the commands used to start, control, and stop Hadoop (the common scripts are described below).


2. Configuring Hadoop for 2-10 nodes

The original article includes a figure of this layout: all of the Hadoop machines are controlled from the HNClient machine, with a terminal window open to each machine (each one is connected over SSH; see the previous lesson for how to set up the connections).


Step 1: Set up passwordless SSH access to all hosts

ssh-copy-id -i $HOME/.ssh/id_rsa.pub nuggetuser@HNData1

Copy the key to all HNData nodes and to the Secondary Name Node in the same way. You can then log on to each of them without a password.
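If you have several nodes, the same command can be wrapped in a small shell loop. This is only a sketch: HNData2 and HNData3 are assumed hostnames used for illustration, and nuggetuser is the account from the example above.

for host in HN2ndName HNData1 HNData2 HNData3; do
    # push the public key to each node so ssh no longer prompts for a password
    ssh-copy-id -i $HOME/.ssh/id_rsa.pub nuggetuser@$host
done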


Step 2: Configure the masters and slaves files

All configuration files are under the /usr/local/hadoop/conf folder.

Configure the masters file to point to the Secondary Name Node, and configure the slaves file to point to all of the HNData nodes.

The masters file contains localhost by default.

Open the masters file in any editor, delete localhost, and enter HN2ndName (the hostname of your Secondary Name Node).

Similarly, edit the slaves file and enter the hostnames of all of the HNData nodes; see the example below.
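For reference, with the hostnames used in this series the two files might end up looking like this (HNData2 and HNData3 are assumed node names; list whatever Data Nodes you actually have, one hostname per line):

masters:
HN2ndName

slaves:
HNData1
HNData2
HNData3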


Step 3: Configure all Data Nodes to point to the Name Node, and all Task Trackers to point to the Job Tracker.

The former is configured through core-site.xml, the latter through mapred-site.xml.

Configure core-site.xml on each HNData node as shown below (because we copied the configuration from the previous machine directly, this file should already be configured along these lines):
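A minimal sketch of what that core-site.xml might contain, assuming the Name Node runs on the host HNName and listens on port 9000 (use whatever host and port you chose in the previous lesson):

<?xml version="1.0"?>
<configuration>
  <!-- every Data Node and client resolves HDFS through the Name Node -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://HNName:9000</value>
  </property>
</configuration>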


Configure mapred-site.xml as shown below:
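A minimal sketch, assuming the Job Tracker runs on HNName (as in the 2-10 node layout above) and listens on port 9001, which is only a conventional choice:

<?xml version="1.0"?>
<configuration>
  <!-- every Task Tracker reports to the Job Tracker at this host and port -->
  <property>
    <name>mapred.job.tracker</name>
    <value>HNName:9001</value>
  </property>
</configuration>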



Both files should already contain these settings, but it is best to check each Data Node to confirm.


Step 4: Reformat the Name Node

hadoop namenode -format

Step 5: The configuration is complete. Try starting the cluster to see whether it works.

start-dfs.sh starts the Name Node and all of the Data Nodes; use the jps command to check whether they started successfully.


start-mapred.sh starts the Job Tracker and all of the Task Trackers; again, use jps to check whether they started. If something did not start, look at the log files.
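For reference, a rough sketch of what jps might print once everything is up (the process IDs are made up, and the exact set of daemons depends on which roles the machine hosts; HN2ndName would show SecondaryNameNode):

On HNName (Name Node and Job Tracker):
12001 NameNode
12187 JobTracker
12412 Jps

On an HNData node:
8344 DataNode
8467 TaskTracker
8590 Jps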

 

 

3. Commands for starting and stopping Hadoop

To remove (decommission) a node, create an excludes file and enter the hostnames of the nodes you want to remove, for example HNData3.

Then configure core-site.xml on the HNName Node as shown below (add a property at the end):
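A sketch of the property to add, assuming the excludes file was created at /usr/local/hadoop/conf/excludes; dfs.hosts.exclude is the standard HDFS decommission property, but the file path here is only an assumption:

  <property>
    <name>dfs.hosts.exclude</name>
    <!-- path to the file listing nodes to decommission, one hostname per line -->
    <value>/usr/local/hadoop/conf/excludes</value>
  </property>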


You can also create an includes file in the same way to specify which nodes are allowed to join the cluster.

After the configuration is complete, apply it:

hadoop dfsadmin -refreshNodes

You can then see the excluded node on the Name Node web UI at HNName:50070.


Run the rebalancer command

start-balancer.sh

 

Stop the Job Tracker and Task Trackers:

stop-mapred.sh

 

Stop the Name Node and Data Nodes:

stop-dfs.sh


 

To start the HNName Node, Data Nodes, Job Tracker, and Task Trackers all at once, enter:

start-all.sh


