1. Modify core-site.xml, adding the following properties:
<property>
<name>fs.defaultFS</name>
<value>hdfs://backup02:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/zhongml/hadoop-2.7.2/tmp</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
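The directory referenced by hadoop.tmp.dir must exist before HDFS is formatted; a minimal sketch, assuming the path above:
# create the temporary directory named in hadoop.tmp.dir
mkdir -p /home/zhongml/hadoop-2.7.2/tmp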
2. Modify hdfs-site.xml, adding the following properties:
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/zhongml/hadoop-2.7.2/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/zhongml/hadoop-2.7.2/hdfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>backup02:9001</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
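The NameNode and DataNode storage directories should also exist, and HDFS must be formatted once before first use. A sketch, assuming the paths above and run from the Hadoop home directory:
# create the directories named in dfs.namenode.name.dir and dfs.datanode.data.dir
mkdir -p /home/zhongml/hadoop-2.7.2/hdfs/name /home/zhongml/hadoop-2.7.2/hdfs/data
# format the NameNode (run only once)
bin/hdfs namenode -format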
3. First copy mapred-site.xml from its template:
cp mapred-site.xml.template mapred-site.xml
Then modify mapred-site.xml, adding the following properties:
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>backup02:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>backup02:19888</value>
</property>
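The JobHistory server that listens on ports 10020 and 19888 is not started by the usual start scripts; it has to be launched separately. A sketch, run from the Hadoop home directory on backup02:
# start the MapReduce JobHistory server
sbin/mr-jobhistory-daemon.sh start historyserver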
4. Modify yarn-site.xml, adding the following properties:
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>backup02:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>backup02:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>backup02:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>backup02:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>backup02:8088</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>768</value>
</property>
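With yarn-site.xml in place, the YARN daemons can be started; a sketch, run from the Hadoop home directory on backup02:
# start the ResourceManager here and a NodeManager on each host listed in slaves
sbin/start-yarn.sh
Note that 768 MB is below YARN's default minimum container allocation of 1024 MB, so on a machine this small yarn.scheduler.minimum-allocation-mb usually has to be lowered as well.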
5. Configure JAVA_HOME in hadoop-env.sh and yarn-env.sh in the /home/zhongml/hadoop-2.7.2/etc/hadoop directory, as shown below.
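A minimal sketch, assuming the JDK path configured in step 9 below:
# in etc/hadoop/hadoop-env.sh and etc/hadoop/yarn-env.sh
export JAVA_HOME=/jdk/jdk1.8.0_101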
6. Configure the slaves file in the /home/zhongml/hadoop-2.7.2/etc/hadoop directory, as shown below.
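The slaves file lists one worker hostname per line; the hostnames here are hypothetical examples:
# etc/hadoop/slaves
backup03
backup04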
7. Verify that startup was successful:
bin/hadoop fs -ls /
http://backup02:8088 (YARN/MapReduce page)
http://backup02:50070 (HDFS page)
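Before opening the web pages, jps gives a quick check that the daemons came up. A sketch, run on the master:
# start HDFS if it is not running yet
sbin/start-dfs.sh
# should list NameNode, SecondaryNameNode and ResourceManager on the master
jps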
8. On 64-bit Linux, the bundled native libraries (lib/native) may need to be replaced with 64-bit builds.
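Whether the native libraries actually load can be checked with the bundled checknative tool; a sketch, run from the Hadoop home directory:
# report which native libraries (zlib, snappy, openssl, ...) Hadoop can load
bin/hadoop checknative -a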
9. Configure the Java environment variables in /etc/profile.
First uninstall the JDK that ships with Linux.
Run the following command to see which packages need to be uninstalled:
rpm -qa | grep gcj
Then perform the uninstall:
yum -y remove java-1.4.2-gcj-compat-1.4.2.0-40jpp.115
Add the following to the end of /etc/profile:
JAVA_HOME=/jdk/jdk1.8.0_101/
PATH=$JAVA_HOME/bin:$PATH
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME
export PATH
export CLASSPATH
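The new variables take effect after the profile is re-read; java -version confirms the right JDK is picked up:
# reload /etc/profile and verify the JDK version
source /etc/profile
java -version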
Note: To install the Hadoop environment, the firewall must be turned off.
Stop the firewall service:
service iptables stop
Disable it at boot:
chkconfig iptables off
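On distributions that use firewalld rather than iptables (CentOS 7 and later), the equivalent would be:
# stop the firewall now and keep it from starting at boot
systemctl stop firewalld
systemctl disable firewalld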
Install the following software:
yum install svn
yum install autoconf automake libtool cmake
yum install ncurses-devel
yum install openssl-devel
yum install gcc*
This article is from the "Zhongmaolin" blog; please be sure to keep this source: http://zhongml.blog.51cto.com/4808277/1867786