[Spark Asia Pacific Research Institute Series] The Path to Spark Practice - Chapter 1: Building a Spark Cluster (Step 5) (4)

Source: Internet
Author: User
Tags: scp command

7. Perform the same Hadoop 2.2.0 installation and configuration on sparkworker1 and sparkworker2 as on sparkmaster. We recommend using the scp command to copy the Hadoop directory already installed and configured on sparkmaster to sparkworker1 and sparkworker2, as sketched below.
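A minimal sketch of this copy step follows; the install path /usr/local/hadoop-2.2.0 and the root account are assumptions for illustration, not values given in the original text, so adjust them to match your own layout.

    # Copy the configured Hadoop directory from sparkmaster to both workers
    # (path and user are assumed; adjust to your environment).
    scp -r /usr/local/hadoop-2.2.0 root@sparkworker1:/usr/local/
    scp -r /usr/local/hadoop-2.2.0 root@sparkworker2:/usr/local/

    # If HADOOP_HOME and PATH are exported in ~/.bashrc, copy that file as well
    # and re-source it on each worker.
    scp ~/.bashrc root@sparkworker1:~/
    scp ~/.bashrc root@sparkworker2:~/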

8. Start and verify the Hadoop distributed cluster.

Step 1: Format the HDFS file system; a minimal command sketch follows:
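Assuming Hadoop is installed under /usr/local/hadoop-2.2.0 (a hypothetical path), the format command on sparkmaster would look like this:

    # Run once on sparkmaster before the first start; this erases any existing HDFS metadata.
    cd /usr/local/hadoop-2.2.0
    bin/hdfs namenode -format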

Step 2: Start HDFS by running the start script from the sbin directory of the Hadoop installation:
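A sketch of the start command, again assuming the /usr/local/hadoop-2.2.0 install path:

    # start-dfs.sh launches the NameNode and SecondaryNameNode locally and the
    # DataNodes on the hosts listed in etc/hadoop/slaves.
    cd /usr/local/hadoop-2.2.0
    sbin/start-dfs.sh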

The startup process prints its progress to the console.

Once it completes, NameNode and SecondaryNameNode are running on sparkmaster, while DataNode is running on sparkworker1 and sparkworker2; this can be confirmed on each node as sketched below.
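The original article verifies this with screenshots; a rough equivalent check with jps (process IDs will differ) looks like this:

    # On sparkmaster:
    jps
    # Expected processes (PIDs omitted):
    #   NameNode
    #   SecondaryNameNode
    #   Jps

    # On sparkworker1 and sparkworker2:
    jps
    # Expected processes:
    #   DataNode
    #   Jps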

Access http://sparkmaster:50070 in a browser to open the HDFS web UI and view the cluster status.
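If a browser is not handy, the cluster status can also be checked from the command line; hdfs dfsadmin -report (not used in the original text) prints the same live-DataNode information:

    # Should report two live DataNodes: sparkworker1 and sparkworker2.
    bin/hdfs dfsadmin -report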

 

