Hadoop 2.4.1 installation on a virtual machine, single-node setup
1 Java environment variable settings
2 Set the account and the hostname (hostname, /etc/hosts)
Add the following to the user's .bash_profile:
export JAVA_HOME=/usr/java/jdk1.7.0_60
export HADOOP_PREFIX=/home/hadoop/hadoop-2.4.1
export CLASSPATH=".:$JAVA_HOME/lib:$CLASSPATH"
export PATH="$JAVA_HOME/bin:$HADOOP_PREFIX/bin:$PATH"
export HADOOP_PREFIX PATH CLASSPATH
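After editing .bash_profile, the variables can be loaded into the current shell and checked. This quick verification is not part of the original steps, only a sanity check:
$ source ~/.bash_profile
$ echo $JAVA_HOME $HADOOP_PREFIX
$ java -version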
3 Set up password-less SSH login
Make sure the firewall is turned off on all hosts first.
$ cd ~/.ssh
$ ssh-keygen -t rsa
Press Enter at each prompt; with the default options the generated key is saved in .ssh/id_rsa.
$ cp id_rsa.pub authorized_keys
$ sudo service sshd restart
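If key-based login still prompts for a password, file permissions are the usual cause. The following quick check is a sketch based on standard OpenSSH requirements rather than on the original text:
$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys
$ ssh localhost
The last command should log you in without asking for a password.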
4 The configuration of Hadoop
Enter the hadoop-2.4.1 folder and configure the files under etc/hadoop.
hadoop-env.sh:
export JAVA_HOME=/usr/java/jdk1.7.0_60
Additional optional entries:
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib"
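The original only shows hadoop-env.sh. For a pseudo-distributed single-node setup that can run the HDFS format and start steps below, core-site.xml and hdfs-site.xml under etc/hadoop usually need minimal entries as well. The snippet below is a sketch; the hdfs://localhost:9000 address is a common default and an assumption, not something taken from the original.
core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>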
At this point, the single-node environment configuration is complete.
Next, start it up. Format the NameNode first:
$ ./bin/hadoop namenode -format
$ ./sbin/start-all.sh
The new version of Hadoop actually discourages starting everything with start-all.sh in one go; the recommendation is to start step by step: the DFS daemons first, then the MapReduce side.
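A step-by-step start on 2.4.1 looks roughly like this (a sketch: in Hadoop 2.x the start scripts live under sbin/ and the MapReduce side runs on YARN):
$ ./sbin/start-dfs.sh
$ ./sbin/start-yarn.sh
$ jps
jps should then list NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager.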
$ ./bin/hadoop dfsadmin -report
Check the NameNode web UI at http://localhost:50070
Common Hadoop commands (collected from the Internet):
1. List all the commands that the Hadoop shell supports
$ bin/hadoop fs -help
2. Display detailed information about a command
$ bin/hadoop fs -help command-name
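For example, to show the detailed help for the ls subcommand (ls is just one of the supported commands):
$ bin/hadoop fs -help ls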
3. View a summary of the job history logs under a specified path with the following command
$ bin/hadoop job -history output-dir
This command displays job details, along with details of failed and killed tasks.
4. More details about the job, such as successful tasks and the task attempts made for each task, can be viewed with the following command
$ bin/hadoop job -history all output-dir
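As a concrete instance, with the path being a hypothetical job output directory rather than one from the original:
$ bin/hadoop job -history all /user/hadoop/wordcount-output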
5. Format a new distributed file system:
$ bin/hadoop namenode -format
6. On the designated NameNode, run the following command to start HDFS:
$ bin/start-dfs.sh
The bin/start-dfs.sh script references the contents of the ${HADOOP_CONF_DIR}/slaves file on the NameNode and starts the DataNode daemon on all listed slaves.
7. On the designated JobTracker, run the following command to start MapReduce:
$ bin/start-mapred.sh
The bin/start-mapred.sh script references the contents of the ${HADOOP_CONF_DIR}/slaves file on the JobTracker and starts the TaskTracker daemon on all listed slaves.
8. On the designated NameNode, execute the following command to stop HDFS:
$ bin/stop-dfs.sh
The bin/stop-dfs.sh script references the contents of the ${HADOOP_CONF_DIR}/slaves file on the NameNode and stops the DataNode daemon on all listed slaves.
9. On the designated JobTracker, run the following command to stop MapReduce:
$ bin/stop-mapred.sh
The bin/stop-mapred.sh script references the contents of the ${HADOOP_CONF_DIR}/slaves file on the JobTracker and stops the TaskTracker daemon on all listed slaves.
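For reference, on a single-node install the ${HADOOP_CONF_DIR}/slaves file mentioned above usually contains a single entry (assuming everything runs on the local machine):
localhost
Also note that items 6-9 use the Hadoop 1.x script and daemon names; on Hadoop 2.4.1 the MapReduce side is started and stopped with sbin/start-yarn.sh and sbin/stop-yarn.sh instead.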
DFSShell
10. Create a directory named /foodir
$ bin/hadoop dfs -mkdir /foodir
11. View the contents of the file named /foodir/myfile.txt
$ bin/hadoop dfs -cat /foodir/myfile.txt
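To have something to view, a local file can first be copied into HDFS; the local file name here is only an illustrative assumption:
$ bin/hadoop dfs -put ./myfile.txt /foodir/myfile.txt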
DFSAdmin
12. Put the cluster in safe mode
$ bin/hadoop dfsadmin -safemode enter
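The matching subcommands for checking and leaving safe mode are standard dfsadmin options, added here for completeness:
$ bin/hadoop dfsadmin -safemode get
$ bin/hadoop dfsadmin -safemode leave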
13. Display the DataNode list
$ bin/hadoop dfsadmin -report
14. Decommission (retire) the DataNode named datanodename
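The original does not include a command for this step. One common approach, sketched here rather than taken from the original, is to list the node in the exclude file referenced by dfs.hosts.exclude in hdfs-site.xml and then tell the NameNode to re-read it:
$ bin/hadoop dfsadmin -refreshNodes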