[Nutch] Configuration of the Hadoop single-machine pseudo-distribution mode


In the previous blog post we ran Nutch in local mode. So how does Nutch's deploy mode work? As a first step, we will configure Hadoop in single-machine pseudo-distributed mode to prepare for running Nutch in deploy mode.

1. Download Hadoop

In the workspace directory, use the following command to download Hadoop 1.2.1:

wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz

After the download finishes, extract the archive:

tar -zxvf hadoop-1.2.1.tar.gz
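
The mirror above may no longer carry this old release. If the download fails, the same tarball should also be available from the Apache release archive (standard archive layout; this URL is an assumption, not taken from the original post):

wget https://archive.apache.org/dist/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz
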
2. Setting up the Hadoop runtime environment

Add the Hadoop path to the current user's profile (.bashrc).
Open the configuration file with Vim:

vim ~/.bashrc

Add the Hadoop bin directory to PATH:

export PATH=/home/kandy/workspace/hadoop-1.2.1/bin:$PATH


Log in to the account again so that the change takes effect:

ssh localhost
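
Note that start-all.sh (used later) starts the Hadoop daemons over SSH to localhost. If ssh localhost prompts for a password, passwordless login can be set up with a sketch like the following (assuming an RSA key and the default ~/.ssh layout):

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
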

Check the path to the hadoop executable:

which hadoop

The command should print /home/kandy/workspace/hadoop-1.2.1/bin/hadoop.

3. Configure the Hadoop runtime parameters

Enter the root directory of Hadoop:

cd hadoop-1.2.1

3.1 Configuring the core-site.xml file

Use Vim to open the core-site.xml file in the conf directory:

vim conf/core-site.xml

Add the following content to the file:

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/kandy/workspace/tmp</value>
</property>
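
hadoop.tmp.dir points to a directory under the workspace; it may help to create it up front so it exists and is writable (the path is taken from the configuration above):

mkdir -p /home/kandy/workspace/tmp
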


3.2 Configuring hdfs-site.xml

Use Vim to open the hdfs-site.xml file under the conf directory:

vim conf/hdfs-site.xml

Add the following content to the file:

<property>
  <name>dfs.name.dir</name>
  <value>/home/kandy/workspace/dfs/filesystem/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/home/kandy/workspace/dfs/filesystem/data</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>


3.3 Configuring mapred-site.xml

Open the mapred-site.xml file under the conf directory with Vim:

vim conf/mapred-site.xml

Add the following content to the file:

<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>2</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>2</value>
</property>
<property>
  <name>mapred.system.dir</name>
  <value>/home/kandy/workspace/mapreduce/system</value>
</property>
<property>
  <name>mapred.local.dir</name>
  <value>/home/kandy/workspace/mapreduce/local</value>
</property>


3.4 Configuring the hadoop-env.sh file

Use Vim to open the hadoop-env.sh file under the conf directory:

vim conf/hadoop-env.sh

To configure JAVA_HOME, add the following line:

export JAVA_HOME=/usr/lib/jvm/java-8-oracle
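
The path above assumes the Oracle JDK 8 package location on Ubuntu. If the JDK lives elsewhere, its home directory can be located with a sketch like this (assumes javac is on the PATH):

readlink -f "$(which javac)" | sed 's:/bin/javac::'
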


4. Format the NameNode

Use the following command:

hadoop namenode -format

The output reports the formatting details; you should see a message that the storage directory /home/kandy/workspace/dfs/filesystem/name (the dfs.name.dir configured above) has been successfully formatted.

5. Start the cluster and view the web management interface

5.1 Starting the cluster

Start the cluster with the following command:

start-all.sh


Running the jps command should now show several new Java processes:

If all of the Hadoop daemons appear, the cluster has started successfully.
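
On a pseudo-distributed Hadoop 1.x setup, the expected daemons are the NameNode, DataNode, SecondaryNameNode, JobTracker, and TaskTracker. The jps output looks roughly like the following (the process IDs are illustrative and will differ):

jps
2081 NameNode
2210 DataNode
2356 SecondaryNameNode
2449 JobTracker
2597 TaskTracker
2783 Jps
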

5.2 Viewing the web management pages

Access http://192.168.238.130:50030 to view the running status of the JobTracker.

Access http://192.168.238.130:50060 to view the running status of the TaskTracker.

Access http://192.168.238.130:50070 to view the status of the NameNode and the entire distributed file system, browse the files in HDFS, inspect the logs, and so on.
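
The same checks can also be made from the command line. As a quick smoke test (assuming the daemons are up and the hadoop command is on the PATH), HDFS should respond to:

hadoop fs -ls /
hadoop dfsadmin -report
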
