Spark Tutorial - Build a Spark Cluster: Configure Hadoop Pseudo-Distributed Mode and Run the WordCount Example (1)

Step 4: Configure Hadoop pseudo-distributed mode and run the WordCount example

Pseudo-distributed mode mainly involves the following configuration changes:

  1. Modify Hadoop's core configuration file, core-site.xml, mainly to set the HDFS address and port number;
  2. Modify Hadoop's HDFS configuration file, hdfs-site.xml, mainly to set the replication factor;
  3. Modify Hadoop's MapReduce configuration file, mapred-site.xml, mainly to set the JobTracker address and port.

Before making these changes, create several folders in the Hadoop installation directory, as sketched below:
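A minimal sketch, assuming the tmp, hdfs/name, and hdfs/data layout commonly used in Hadoop 1.x pseudo-distributed setups; the folder names are assumptions, not taken from the original:

    # Run from the Hadoop installation directory.
    # The folder names must match the paths referenced
    # in core-site.xml and hdfs-site.xml below.
    mkdir tmp
    mkdir -p hdfs/name hdfs/data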

The following describes how to build and test the pseudo-distributed architecture:

First, configure the core-site.xml file. Open conf/core-site.xml in vi and edit it so that the configured file sets the HDFS address and port:
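A minimal sketch of a Hadoop 1.x pseudo-distributed core-site.xml; port 9000 is the conventional tutorial value, and the hadoop.tmp.dir path is illustrative, so both are assumptions here:

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <configuration>
      <!-- HDFS address and port: the NameNode runs on the local machine -->
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
      </property>
      <!-- Scratch directory; illustrative path, point it at the
           tmp folder created above -->
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/rocky/hadoop/tmp</value>
      </property>
    </configuration>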

Run the ": WQ" command to save and exit.

Next, configure hdfs-site.xml. Open the file and set the replication factor; the configured file is as follows:
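Again a sketch: dfs.replication=1 matches the article's stated purpose, while the dfs.name.dir and dfs.data.dir entries are assumptions that tie the file to the folders created earlier:

    <?xml version="1.0"?>
    <configuration>
      <!-- One replica is enough on a single-node cluster -->
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
      <!-- Assumed: where the NameNode and DataNode keep their data;
           the paths are illustrative -->
      <property>
        <name>dfs.name.dir</name>
        <value>/home/rocky/hadoop/hdfs/name</value>
      </property>
      <property>
        <name>dfs.data.dir</name>
        <value>/home/rocky/hadoop/hdfs/data</value>
      </property>
    </configuration>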

Enter ": WQ" to save the modification information and exit.

Next, modify the mapred-site.xml configuration file. Open it and set the JobTracker address and port; the modified mapred-site.xml is as follows:
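A minimal sketch of the Hadoop 1.x JobTracker setting; port 9001 is the conventional tutorial value and an assumption here:

    <?xml version="1.0"?>
    <configuration>
      <!-- JobTracker address and port for MapReduce jobs -->
      <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
      </property>
    </configuration>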

Run the ": WQ" command to save and exit.

With the above steps, the simplest pseudo-distributed setup is complete.

Next, format the Hadoop NameNode:
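On Hadoop 1.x this is done from the installation directory with:

    bin/hadoop namenode -format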

Enter "Y" to complete the formatting process:

Next, start Hadoop as follows:
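On Hadoop 1.x a single script starts the whole pseudo-distributed stack (the HDFS and MapReduce daemons):

    bin/start-all.sh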

Use the jps command that ships with the JDK to list all the daemon processes:
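A sketch of the check; the PIDs are illustrative, but on a healthy Hadoop 1.x pseudo-distributed node all five daemons should appear:

    $ jps
    4422 NameNode
    4531 DataNode
    4651 SecondaryNameNode
    4729 JobTracker
    4840 TaskTracker
    4921 Jps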

Hadoop has started successfully!

Next, you can view Hadoop's running status on the Web pages that Hadoop provides for monitoring cluster status. The specific pages are as follows:

http://localhost:50030/jobtracker.jsp
http://localhost:50060/tasktracker.jsp
http://localhost:50070/dfshealth.jsp

 

If these Hadoop status monitoring pages display correctly, our pseudo-distributed development environment has been fully set up!

Next, run the WordCount program on the newly built pseudo-distributed platform:

First, create the input directory in DFS:
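On Hadoop 1.x the directory is created with the hadoop fs shell; "input" here is a relative path:

    bin/hadoop fs -mkdir input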

In this case, because no absolute HDFS path was specified when creating it, the "input" directory is created under the HDFS home directory of the current user "Rocky", which you can verify in the Web console:
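The same check can also be made from the command line (a sketch; relative paths resolve under /user/<current user>, e.g. /user/rocky here):

    bin/hadoop fs -ls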

