[Spark Asia Pacific Research Institute Series] The Path to Spark Practice - Chapter 1: Building a Spark Cluster (Step 2) (1)

Source: Internet
Author: User

    1. Prepare the second and third machines running the Ubuntu system in VMware;

 

Building the second and third Ubuntu machines in VMware is exactly the same as building the first machine.

The differences from installing the first Ubuntu machine are as follows:

First: we name the second and third Ubuntu machines slave1 and slave2.

There are now three VMs in the VMware environment.

Second: to simplify the Hadoop configuration and keep the Hadoop cluster setup as minimal as possible, we log in to the system as the root superuser when building the second and third machines, just as on the first machine.

 

2. Configure the machines running the Ubuntu system in pseudo-distributed mode;

Configuring these machines in pseudo-distributed mode follows exactly the same procedure as configuring the first machine.

After the pseudo-distributed installation is complete:

3. Configure the Hadoop distributed cluster environment;

According to the previous configuration, we now have three machines running the Ubuntu system in VMware: Master, slave1, and slave2.

Configure the Hadoop distributed cluster environment as follows:

Step 1: Modify the host name in /etc/hostname and configure the mapping between the host name and IP address in /etc/hosts:

We use the Master machine as the master node of Hadoop. First, let's take a look at the IP address of the Master machine:
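The screenshot from the original article is not reproduced here; as a rough sketch, the address can be checked from a terminal on the Master machine (assuming the net-tools package that provides ifconfig is installed; on systems without it, ip addr from iproute2 gives the same information):

    # List all network interfaces and their IP addresses (requires net-tools)
    ifconfig

    # Equivalent query with iproute2 if ifconfig is not available
    ip addr show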

The IP address of the current host is "192.168.184.20".

Modify the host name in /etc/hostname:

Enter the configuration file:
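For example (vim is just one choice of editor; nano or gedit would work the same way, and no sudo is needed because we are logged in as root):

    # Open the host name configuration file for editing
    vim /etc/hostname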


We can see the default name that was set when Ubuntu was installed: the machine name in the configuration file is "Rocky-Virtual-Machine". Change "Rocky-Virtual-Machine" to "master" so that this machine becomes the master node of the Hadoop distributed cluster environment:
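If you prefer a single command to editing the file by hand, the same change can be sketched like this (it simply overwrites the file's only line with the new name):

    # Replace the contents of /etc/hostname with the new host name
    echo "master" > /etc/hostname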

Save and exit. Run the following command to view the current host name:
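The command shown in the original screenshot is not reproduced here; the standard way to query the name is the hostname utility:

    # Print the host name currently in effect
    hostname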

The modified host name does not take effect immediately. To make the new host name take effect, restart the system and then check the host name again:
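A minimal sketch of the restart-and-verify sequence (run as root):

    # Reboot so that the new host name takes effect
    reboot

    # After the system comes back up, check the name again
    hostname    # should now print: master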

If the host name is changed to "master", the modification is successful.

Open the /etc/hosts file:
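Again, any editor run as root will do, for example:

    # Open the hosts file for editing
    vim /etc/hosts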

At this point, the file only contains the default mapping between the loopback IP address (127.0.0.1) and the host name localhost:
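For reference, the relevant default entry looks roughly like this (additional IPv6 lines that Ubuntu ships with are omitted):

    127.0.0.1    localhost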

In /etc/hosts, configure the mapping between host names and IP addresses:
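A sketch of the entries to add is shown below. The master address is the one found above; the slave1 and slave2 addresses are placeholders, so substitute the actual IP addresses of your second and third machines:

    # Host name to IP address mappings for the Hadoop cluster
    192.168.184.20    master
    192.168.184.21    slave1    # placeholder, use slave1's real IP
    192.168.184.22    slave2    # placeholder, use slave2's real IP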

Save the modification and exit.

 

Next, run the ping command to check whether the mapping between the host name and the IP address resolves correctly:
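For example (the -c 4 option simply limits the test to four packets):

    # Ping the master host by name; the replies should come from 192.168.184.20
    ping -c 4 master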

We can see that the IP address corresponding to our host name "master" is "192.168.184.20", which indicates that our configuration and operations are correct.
