Hadoop 2.7.7 deployment
I. Install CentOS 7 in VMware 14 (procedure omitted)
II. Configure Linux
1. Disable the firewall:
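On CentOS 7 this is typically done with systemctl (a minimal sketch; adjust if your image still uses iptables):
systemctl stop firewalld       # stop the running firewall service
systemctl disable firewalld    # keep it from starting again on boot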
2. View the current Java version:
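The bundled version can be checked with:
java -version    # prints the OpenJDK version that ships with CentOS 7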
3. Delete OpenJDK
Command: rpm -qa | grep java lists the Java-related packages.
Command: rpm -e --nodeps followed by a package name deletes the built-in Java.
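A one-pass sketch of the removal, assuming every package returned by the query should be dropped (run as root):
rpm -qa | grep java | xargs rpm -e --nodeps    # remove all listed OpenJDK packages, ignoring dependencies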
4. Install JDK
Download the official JDK 1.8.
Command: rpm -ivh plus the RPM file name installs the JDK.
Installation complete.
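A sketch of the install, assuming the downloaded package is named jdk-8u191-linux-x64.rpm (the file name is an assumption; the jps output later in this post shows jdk1.8.0_191):
rpm -ivh jdk-8u191-linux-x64.rpm                  # installs under /usr/java/jdk1.8.0_191-amd64
/usr/java/jdk1.8.0_191-amd64/bin/java -version    # confirm the new JDK responds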
5. Download Hadoop:
Download Hadoop from the official website and upload it to the virtual machine.
6. Create two more VMs
Use the clone function of the virtual machine to clone cmaster into two copies: slave0 and slave1.
Cloning completed
7. Set the host names to cmaster, slave0, and slave1 respectively.
vim /etc/hostname
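Alternatively, on CentOS 7 the name can be set without editing the file directly (run on each machine with its own name):
hostnamectl set-hostname cmaster    # use slave0 / slave1 on the other two VMs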
8. Add host name mappings:
ifconfig          # view the IP addresses of the three VMs
vim /etc/hosts    # add the mappings on all three machines (see the example below)
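A hedged example of the /etc/hosts entries, using placeholder addresses (substitute the IPs that ifconfig reported):
192.168.1.10  cmaster    # example IP only
192.168.1.11  slave0
192.168.1.12  slave1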
Ping the other machines from each node.
Ping successful
III. Install Hadoop
1. Decompress Hadoop
tar -zxvf hadoop-2.7.7.tar.gz
Decompress Hadoop on all three machines.
2. Configure Hadoop (required on all three machines)
2.1 Edit hadoop-env.sh:
vim /home/krysent/hadoop-2.7.7/etc/hadoop/hadoop-env.sh
Add the Java path (JAVA_HOME).
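A sketch of the line to set, assuming the JDK installed under /usr/java/jdk1.8.0_191-amd64 (the path that appears in the jps commands later in this post):
export JAVA_HOME=/usr/java/jdk1.8.0_191-amd64    # point Hadoop at the installed JDK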
2.2 Edit core-site.xml in the /home/krysent/hadoop-2.7.7/etc/hadoop/ directory.
Add inside the <configuration> tag:
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/krysent/cloudData</value>
</property>
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://cMaster:8020</value>
</property>
2.3 Edit yarn-site.xml in the /home/krysent/hadoop-2.7.7/etc/hadoop/ directory.
Add inside the <configuration> tag:
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>cMaster</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
2.4 Rename mapred-site.xml.template under the /home/krysent/hadoop-2.7.7/etc/hadoop/ directory to mapred-site.xml and add inside the <configuration> tag:
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
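A sketch of the rename step (cp keeps the template as a backup; mv also works):
cd /home/krysent/hadoop-2.7.7/etc/hadoop
cp mapred-site.xml.template mapred-site.xml    # create the real config file from the template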
IV. Start Hadoop
1. Format HDFS on the master node cmaster:
/home/krysent/hadoop-2.7.7/bin/hdfs namenode -format
2. On the master node cmaster, start the storage master service (NameNode) and the resource management master service (ResourceManager):
/home/krysent/hadoop-2.7.7/sbin/hadoop-daemon.sh start namenode
/home/krysent/hadoop-2.7.7/sbin/yarn-daemon.sh start resourcemanager
3. On the slave nodes (both slave0 and slave1 are required), start the storage slave service (DataNode) and the resource management slave service (NodeManager):
/home/krysent/hadoop-2.7.7/sbin/hadoop-daemon.sh start datanode
/home/krysent/hadoop-2.7.7/sbin/yarn-daemon.sh start nodemanager
V. Test Hadoop:
[krysent@cmaster hadoop-2.7.7]$ /usr/java/jdk1.8.0_191-amd64/bin/jps
17826 Jps
9942 ResourceManager
8908 NameNode
[krysent@slave0 hadoop]$ /usr/java/jdk1.8.0_191-amd64/bin/jps
15890 Jps
8501 DataNode
8684 NodeManager
[krysent@slave1 hadoop-2.7.7]$ /usr/java/jdk1.8.0_191-amd64/bin/jps
8578 NodeManager
8707 DataNode
15764 Jps
Enter cmaster:50070 in the Firefox address bar to view the HDFS web UI:
VI. Use
1. Create an /in directory in the cluster
/home/krysent/hadoop-2.7.7/bin/hdfs dfs -mkdir /in
2. Upload local files to HDFS
/home/krysent/hadoop-2.7.7/bin/hdfs dfs -put /home/krysent/hadoop-2.7.7/etc/hadoop/* /in
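To confirm the upload, the directory can be listed (a quick check, not part of the original steps):
/home/krysent/hadoop-2.7.7/bin/hdfs dfs -ls /in    # should show the copied Hadoop config files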
3. Use the wordcount example program to process the data (run from the Hadoop home directory, since the paths below are relative):
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.7.jar wordcount /in /out/wc-01
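When the job finishes, the word counts can be read back from HDFS (the part-r-00000 file name assumes the default single reducer):
bin/hdfs dfs -cat /out/wc-01/part-r-00000    # print the wordcount results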
View the result in Firefox: