Note: The following installation steps were performed on CentOS 6.5. They also apply to other operating systems; if you are on Ubuntu or another Linux distribution, just note that a few commands differ slightly.
Pay attention to which user each step requires; for example, shutting down the firewall needs root privileges.
A single-node Hadoop installation typically runs into trouble in a few places: configuring the JDK environment, shutting down the firewall, and mixing up the operations that must be done as root with those done as the hadoop user.
If you follow the steps below carefully, you should not run into any problems.
I. Preparation (root user)
1. Turn off the firewall
Turn off the firewall: service iptables stop
Disable it at boot: chkconfig iptables off
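To confirm the firewall is really stopped and disabled, CentOS 6 offers status commands (an optional sanity check, not part of the original steps):
service iptables status
chkconfig --list iptables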
2. Create a user
Create a hadoop user: useradd hadoop
Set its password: passwd hadoop
Add it to sudoers: vim /etc/sudoers, and under the root line add: hadoop ALL=(ALL) ALL
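The new account and its sudo entry can be verified like this (an optional check; sudo -l lists what the user may run):
id hadoop
su - hadoop -c "sudo -l"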
3. Modify the hosts file
At the end of the /etc/hosts file, add:
127.0.0.1 hadoop
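The new entry can be verified with a quick ping (optional):
ping -c 1 hadoop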
II. Install JDK 1.8 (root user)
1. Check which JDKs are installed
rpm -qa | grep java
rpm -qa | grep jdk
2. Uninstall the packages found in the previous step
rpm -e --nodeps <package name> (e.g.:
rpm -e --nodeps tzdata-java-2013g-1.el6.noarch
rpm -e --nodeps java-1.7.0-openjdk-1.7.0.45-2.4.3.3.el6.x86_64
rpm -e --nodeps java-1.6.0-openjdk-1.6.0.0-1.66.1.13.0.el6.x86_64)
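The matching packages can also be removed in one pass (a sketch; review the grep output first, since --nodeps skips dependency checks):
rpm -qa | grep -E 'java|jdk' | xargs -r rpm -e --nodeps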
3. Install JDK 1.8
rpm -ivh jdk-8-linux-x64.rpm (run this command in the directory containing the .rpm file; it can be installed from any directory, and the JDK is placed in /usr/java/jdk1.8.0 by default)
4. Modify environment variables
Edit the /etc/profile file, adding the following lines at the end:
export JAVA_HOME=/usr/java/jdk1.8.0
export JRE_HOME=/usr/java/jdk1.8.0/jre
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
5. Make the new environment variables take effect
source /etc/profile
6. Verify that the JDK was installed successfully
java -version
echo $JAVA_HOME
III. SSH passwordless login (hadoop user)
1. Generate a key pair
ssh-keygen -t dsa (press Enter at each prompt; the .ssh folder is generated automatically, with two files in it)
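To skip the interactive prompts, the key can also be generated in one line with an empty passphrase (the form commonly shown in the Hadoop setup documentation):
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa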
2. Create authorized_keys
Enter the /home/hadoop/.ssh directory
cat id_dsa.pub >> authorized_keys
3. Set the correct permissions on authorized_keys (sshd ignores the file if it is group- or world-writable)
chmod 600 authorized_keys
4. Test whether you can log in locally without a password
ssh localhost
If no password prompt appears, the setup succeeded.
IV. Install Hadoop (hadoop user)
1. Extract to the target directory (for example, /home/hadoop)
tar -zxvf hadoop-2.5.1.tar.gz
2. Edit the configuration files
The configuration files are in the /home/hadoop/hadoop-2.5.1/etc/hadoop/ directory.
2.1 core-site.xml file
Add the following between <configuration> and </configuration>:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/hadoop-2.5.1/tmp</value>
</property>
2.2 hdfs-site.xml file
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/home/hadoop/hadoop-2.5.1/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/home/hadoop/hadoop-2.5.1/data</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
Note: both the /home/hadoop/hadoop-2.5.1/name and /home/hadoop/hadoop-2.5.1/data directories must already exist.
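If they are missing, they can be created together with the tmp directory used in core-site.xml (one possible way):
mkdir -p /home/hadoop/hadoop-2.5.1/tmp /home/hadoop/hadoop-2.5.1/name /home/hadoop/hadoop-2.5.1/data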
2.3 mapred-site.xml file
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
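The stock 2.5.1 distribution ships only a template for this file; if mapred-site.xml is absent, create it from the template first (run inside the etc/hadoop directory):
cp mapred-site.xml.template mapred-site.xml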
2.4 mapred-env.sh file
export JAVA_HOME=/usr/java/jdk1.8.0
export HADOOP_MAPRED_PID_DIR=/home/hadoop/hadoop-2.5.1/tmp
2.5 hadoop-env.sh file
export JAVA_HOME=/usr/java/jdk1.8.0
export HADOOP_PID_DIR=/home/hadoop/hadoop-2.5.1/tmp
export HADOOP_SECURE_DN_PID_DIR=/home/hadoop/hadoop-2.5.1/tmp
2.6 yarn-site.xml file
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
3. Add Hadoop to the environment variables
sudo vim /etc/profile and add the following two lines:
export HADOOP_HOME=/home/hadoop/hadoop-2.5.1
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
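After editing, reload the profile and confirm that the hadoop command resolves (a quick check):
source /etc/profile
hadoop version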
V. Startup (hadoop user)
1. Format the NameNode
hdfs namenode -format
If it succeeds, a current folder is generated under /home/hadoop/hadoop-2.5.1/name/.
2. Start the NameNode and DataNode
hadoop-daemon.sh start namenode
hadoop-daemon.sh start datanode
Verify that both daemons started successfully with jps.
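jps lists running JVMs by class name, so successful startup looks roughly like this (PIDs will differ):
2481 NameNode
2589 DataNode
2700 Jps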
3. Start YARN
start-yarn.sh
Run jps to verify.
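If YARN started correctly, jps should now also show the two YARN daemons (PIDs will differ):
2890 ResourceManager
2995 NodeManager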
4. View the web interface
Enter ip:50070 in a browser (example: http://192.168.56.103:50070/)
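The YARN ResourceManager serves its own web UI as well, on port 8088 by default (e.g., http://192.168.56.103:8088/).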
VI. Run the WordCount example (hadoop user)
The WordCount example is in hadoop-mapreduce-examples-2.5.1.jar under /home/hadoop/hadoop-2.5.1/share/hadoop/mapreduce.
1. Upload a local file to HDFS
hadoop fs -put <file> /test (e.g., hadoop fs -put 1 /test uploads the local file 1 to the /test directory in HDFS)
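The /test directory must exist in HDFS before the upload. A complete sequence, using a hypothetical sample file named 1 to match the example, might look like:
echo "hello hadoop hello world" > 1
hadoop fs -mkdir /test
hadoop fs -put 1 /test
hadoop fs -ls /test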
2. Run
hadoop jar hadoop-mapreduce-examples-2.5.1.jar wordcount /test/1 /test/output/1
Note: /test/output/1 must be a directory that does not yet exist; the job creates it.
3. View the results
hadoop fs -cat /test/output/1/part-r-00000
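For the hypothetical sample file above, the output is one word per line followed by a tab and its count:
hadoop	1
hello	2
world	1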