Starting and stopping daemons in Hadoop
Version: Hadoop 1.2.1
Script descriptions
start-all.sh: starts all Hadoop daemons, i.e. NameNode, SecondaryNameNode, DataNode, JobTracker, and TaskTracker.
stop-all.sh: stops all Hadoop daemons, i.e. NameNode, SecondaryNameNode, DataNode, JobTracker, and TaskTracker.
start-dfs.sh: starts the Hadoop HDFS daemons NameNode, SecondaryNameNode, and DataNode.
stop-dfs.sh: stops the Hadoop HDFS daemons NameNode, SecondaryNameNode, and DataNode.
hadoop-daemons.sh start namenode: starts the NameNode daemon separately.
hadoop-daemons.sh stop namenode: stops the NameNode daemon separately.
hadoop-daemons.sh start datanode: starts the DataNode daemon separately.
hadoop-daemons.sh stop datanode: stops the DataNode daemon separately.
hadoop-daemons.sh start secondarynamenode: starts the SecondaryNameNode daemon separately.
hadoop-daemons.sh stop secondarynamenode: stops the SecondaryNameNode daemon separately.
start-mapred.sh: starts the Hadoop MapReduce daemons JobTracker and TaskTracker.
stop-mapred.sh: stops the Hadoop MapReduce daemons JobTracker and TaskTracker.
hadoop-daemons.sh start jobtracker: starts the JobTracker daemon separately.
hadoop-daemons.sh stop jobtracker: stops the JobTracker daemon separately.
hadoop-daemons.sh start tasktracker: starts the TaskTracker daemon separately.
hadoop-daemons.sh stop tasktracker: stops the TaskTracker daemon separately.
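After running any of these scripts, you can verify which daemons are actually up with the jps tool that ships with the JDK; it lists the Java processes running on the node. A minimal sketch of the expected output on a single node running both HDFS and MapReduce (the process IDs are illustrative):

    $ jps
    2186 NameNode
    2304 DataNode
    2423 SecondaryNameNode
    2511 JobTracker
    2630 TaskTracker
    2745 Jps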
If the Hadoop cluster is being started for the first time, you can simply use start-all.sh. A more common practice, however, is to start the daemons module by module, in the order described below.
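As a sketch of that first start (assuming the scripts in $HADOOP_HOME/bin are on the PATH), a fresh Hadoop 1.2.1 cluster typically has its NameNode formatted once before the daemons are brought up:

    $ hadoop namenode -format   # only before the very first start; this wipes existing HDFS metadata
    $ start-all.sh              # NameNode, SecondaryNameNode, DataNode, JobTracker, TaskTracker
    $ stop-all.sh               # shuts all of them down again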
1. Start the daemons of the HDFS module.
The HDFS daemons are started in sequence (a command sketch follows this list), namely:
1) Start the NameNode daemon;
2) Start the DataNode daemon;
3) Start the SecondaryNameNode daemon.
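A minimal sketch of that sequence on a single node, using hadoop-daemon.sh, which starts a daemon on the local machine (the hadoop-daemons.sh variant listed above runs the same command on every host in conf/slaves):

    $ hadoop-daemon.sh start namenode
    $ hadoop-daemon.sh start datanode
    $ hadoop-daemon.sh start secondarynamenode

start-dfs.sh performs these three steps for you in this order.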
2. Start the daemons of the MapReduce module.
The MapReduce daemons are likewise started in order (a command sketch follows this list), namely:
1) Start the JobTracker daemon;
2) Start the TaskTracker daemon.
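The corresponding sketch, again assuming the daemons run on the local node:

    $ hadoop-daemon.sh start jobtracker
    $ hadoop-daemon.sh start tasktracker

start-mapred.sh performs these two steps in the same order.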