I have only recently started working with Hadoop, and the first task is to install it. Before installing Hadoop, make the following preparations:
- A Linux environment. I installed CentOS in a VMware virtual machine; setting that up is too big a topic to cover here, so please search for a guide.
- A JDK 1.6 (or later) installation package for Linux.
- The Hadoop 2.6.0 installation package.
Note that I am using 64-bit Linux, so the Java package is also the 64-bit one: http://pan.baidu.com/s/1kT3PYLL
Hadoop supports three deployment modes: 1. standalone mode, 2. pseudo-distributed mode, 3. fully distributed mode. Since this install is for learning, I use the second, pseudo-distributed mode.
Assuming you have a Linux environment installed, the Hadoop installation process is as follows:
1. Install Java
Put the Java installation package above into the /usr/java directory on Linux and give it execute permission:
chmod +x jdk-6u45-linux-x64-rpm.bin
Then run it to install Java:
./jdk-6u45-linux-x64-rpm.bin
After the installation completes, configure the environment variable: open ~/.bash_profile (vi ~/.bash_profile) and add the following line:
export JAVA_HOME=/usr/java/jdk1.6.0_45
2. Passwordless SSH setup
Run the following command:
ssh-keygen -t rsa
and keep pressing Enter at the prompts; the key pair ends up in ~/.ssh. Then change into the ~/.ssh directory and run:
cp id_rsa.pub authorized_keys
Finally, verify with ssh localhost; if you are not asked for a password, the setup succeeded.
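The SSH steps above can also be sketched as one non-interactive script. The throwaway directory and the -N ""/-f flags (which replace the interactive Enter presses) are my additions for illustration; on a real node the files would live in ~/.ssh.

```shell
# Sketch of the passwordless-SSH setup, assuming OpenSSH is installed.
# A throwaway directory is used here so real keys are not touched;
# on a real node this would be ~/.ssh.
SSH_DIR="$(mktemp -d)/.ssh"
mkdir -p "$SSH_DIR"
chmod 700 "$SSH_DIR"

# -N "" gives an empty passphrase (the "keep pressing Enter" step);
# -f names the key file explicitly instead of prompting for it.
ssh-keygen -q -t rsa -N "" -f "$SSH_DIR/id_rsa"

# Authorize the new public key for logins to this same machine.
cp "$SSH_DIR/id_rsa.pub" "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
```

After the equivalent of this runs against the real ~/.ssh, `ssh localhost` should log in without a password prompt.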
3. Extract Hadoop
I put the Hadoop installation package in the /tmp directory ahead of time and unzip it from there:
tar -zxvf hadoop-2.6.0.tar.gz -C /hadoop
4. Edit etc/hadoop/hadoop-env.sh and set the following:
export JAVA_HOME=/usr/java/latest
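A quick sanity check before going further: /usr/java/latest is the symlink the JDK RPM install maintains, but if your JDK lives elsewhere this path (an assumption of this sketch) needs adjusting, or the Hadoop daemons will fail to start.

```shell
# Check that the JAVA_HOME set in hadoop-env.sh points at a real JDK.
# /usr/java/latest is assumed from the RPM install above; adjust as needed.
JAVA_HOME=/usr/java/latest
if [ -x "$JAVA_HOME/bin/java" ]; then
  echo "JAVA_HOME looks good: $JAVA_HOME"
else
  echo "warning: no executable at $JAVA_HOME/bin/java"
fi
```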
5. Modify the relevant configuration files
etc/hadoop/core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
etc/hadoop/hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
etc/hadoop/mapred-site.xml (this file does not exist by default in Hadoop 2.6.0; copy etc/hadoop/mapred-site.xml.template to this name first):
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
etc/hadoop/yarn-site.xml:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
6. Verify that the installation succeeded
- Format the file system: $ bin/hdfs namenode -format
- Start the NameNode and DataNode daemons: $ sbin/start-dfs.sh
- Browse the NameNode web interface in your browser:
http://localhost:50070/