Hadoop 2.4.1 Deployment (2): Single-Node Installation

Hadoop 2.4.1 installation in a virtual machine, single-node setup.
1. Set the Java environment variables.
2. Set up the account and hostname (/etc/hosts), and add the following to the user's .bash_profile:

export JAVA_HOME=/usr/java/jdk1.7.0_60
export HADOOP_PREFIX=/home/hadoop/hadoop-2.4.1
export CLASSPATH=".:$JAVA_HOME/lib:$CLASSPATH"
export PATH="$JAVA_HOME/bin:$HADOOP_PREFIX/bin:$PATH"
export HADOOP_PREFIX PATH CLASSPATH

3. Set up passwordless SSH login.
Make sure the firewall is turned off on all hosts first.

$ cd ~/.ssh
$ ssh-keygen -t rsa      (press ENTER at each prompt; with the default options the key is saved in .ssh/id_rsa)
$ cp id_rsa.pub authorized_keys
$ sudo service sshd restart
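To verify that passwordless login works (a quick check, not part of the original steps), SSH to the local host; no password prompt should appear:

$ ssh localhost
$ exit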
4. Hadoop configuration
Enter the hadoop-2.4.1 folder and configure the files under etc/hadoop.

hadoop-env.sh:

export JAVA_HOME=/usr/java/jdk1.7.0_60

Optional additions:

export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib"
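As a quick sanity check that the variables resolve (assuming the .bash_profile from step 2 has been reloaded), the standard hadoop version subcommand can be used:

$ source ~/.bash_profile
$ hadoop version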

5. core-site.xml
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/hadoop/tmp</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hadoop/hadoop-2.4.1/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hadoop/hadoop-2.4.1/dfs/data</value>
    </property>
</configuration>
hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hadoop/hadoop-2.4.1/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hadoop/hadoop-2.4.1/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
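The directories referenced by hadoop.tmp.dir, dfs.namenode.name.dir and dfs.datanode.data.dir can be created up front; a minimal sketch, assuming the same paths as in the configuration above:

$ mkdir -p /home/hadoop/tmp
$ mkdir -p /home/hadoop/hadoop-2.4.1/dfs/name
$ mkdir -p /home/hadoop/hadoop-2.4.1/dfs/data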
mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.jobtracker.address</name>
        <value>hdfs://localhost:9001</value>
    </property>
</configuration>
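Note that a stock Hadoop 2.4.1 distribution ships only a template for this file; if etc/hadoop/mapred-site.xml does not exist yet, it is usually created from the template first:

$ cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml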
yarn-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

At this point, the single-node environment configuration is complete.
Now start it up. First format the NameNode:
./bin/hadoop namenode -format
Then run bin/start-all.sh. Newer versions of Hadoop actually discourage starting everything with start-all; the recommendation is to start step by step: start-dfs first, then start-mapred.
./bin/hadoop dfsadmin -report
NameNode web UI: http://localhost:50070
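Putting the start-up and verification steps together as a sketch (in Hadoop 2.4.1 the start scripts actually live under sbin/ rather than bin/; jps and the ResourceManager web UI on port 8088 are standard checks, though not mentioned above):

$ ./bin/hadoop namenode -format
$ ./sbin/start-dfs.sh
$ ./sbin/start-yarn.sh
$ jps
$ ./bin/hadoop dfsadmin -report

jps should list NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager; the NameNode web UI is at http://localhost:50070 and the ResourceManager web UI at http://localhost:8088.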


Common Hadoop commands (collected from the network):
1. List all commands supported by the Hadoop shell:
$ bin/hadoop fs -help
2. Display detailed information about a command
$ bin/hadoop fs -help command-name
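For example, to show the help text for the ls subcommand (ls is just an illustrative choice of command name):

$ bin/hadoop fs -help ls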
3. View a summary of the history logs under a specified path:
$ bin/hadoop job -history output-dir
This command displays job details, as well as details of failed and killed tasks.
4. More details about a job, such as successful tasks and the number of attempts made for each task, can be viewed with the following command:
$ bin/hadoop job -history all output-dir
5. Format a new distributed file system:
$ bin/hadoop namenode -format
6. On the designated NameNode, run the following command to start HDFS:
$ bin/start-dfs.sh
The bin/start-dfs.sh script consults the ${HADOOP_CONF_DIR}/slaves file on the NameNode and starts the DataNode daemon on every slave listed there.
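The slaves file is simply a list of worker hostnames, one per line; for the single-node setup described here it would contain only:

localhost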
7. On the designated JobTracker, run the following command to start Map/Reduce:
$ bin/start-mapred.sh
The bin/start-mapred.sh script consults the ${HADOOP_CONF_DIR}/slaves file on the JobTracker and starts the TaskTracker daemon on every slave listed there.
8. On the designated NameNode, run the following command to stop HDFS:
$ bin/stop-dfs.sh
The bin/stop-dfs.sh script consults the ${HADOOP_CONF_DIR}/slaves file on the NameNode and stops the DataNode daemon on every slave listed there.
9. On the designated JobTracker, run the following command to stop Map/Reduce:
$ bin/stop-mapred.sh
The bin/stop-mapred.sh script consults the ${HADOOP_CONF_DIR}/slaves file on the JobTracker and stops the TaskTracker daemon on every slave listed there.

DFSShell
10. Create a directory named /foodir:
$ bin/hadoop dfs -mkdir /foodir
11. View the contents of the file named /foodir/myfile.txt:
$ bin/hadoop dfs -cat /foodir/myfile.txt
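A short end-to-end example of the same shell (localfile.txt is a hypothetical local file; hadoop fs is the non-deprecated form of hadoop dfs in 2.x):

$ bin/hadoop fs -mkdir /foodir
$ bin/hadoop fs -put localfile.txt /foodir/myfile.txt
$ bin/hadoop fs -ls /foodir
$ bin/hadoop fs -cat /foodir/myfile.txt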

DFSAdmin
12. Put the cluster in safe mode:
$ bin/hadoop dfsadmin -safemode enter
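To check the current state or leave safe mode again (standard dfsadmin options not listed in the original):

$ bin/hadoop dfsadmin -safemode get
$ bin/hadoop dfsadmin -safemode leave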
13. Display the DataNode list:
$ bin/hadoop dfsadmin -report
14. Decommission the DataNode node datanodename.
