6 HDFS installation process
1) Unpack the installation package
root@master:/usr/local# tar -zxvf hadoop-2.4.0.tar.gz
If you are not extracting as the root user, we recommend using chown to change the ownership of the folder (for example, if the current user is xiaoming):
root@master:/usr/local# sudo chown -R xiaoming:xiaoming hadoop
If the cluster runs a 64-bit operating system, you need to replace the lib/native folder, or warnings will appear when Hadoop starts.
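A quick way to tell whether the replacement is needed is to compare the OS word size with the bundled native library. A hedged sketch (the library path assumes the /usr/local/hadoop layout used in this guide):

```shell
# Confirm the OS word size, then inspect the bundled native library.
ARCH=$(uname -m)            # x86_64 means a 64-bit OS
echo "OS architecture: $ARCH"
NATIVE_LIB=/usr/local/hadoop/lib/native/libhadoop.so.1.0.0
if [ -e "$NATIVE_LIB" ]; then
    file "$NATIVE_LIB"      # should report 64-bit to match the OS
else
    echo "native library not found at $NATIVE_LIB"
fi
```

If `file` reports a 32-bit library on a 64-bit OS, replace lib/native with 64-bit builds.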
2) Modify the configuration file
There are 7 main configuration files involved:
/usr/local/hadoop/etc/hadoop/hadoop-env.sh
/usr/local/hadoop/etc/hadoop/yarn-env.sh
/usr/local/hadoop/etc/hadoop/slaves
/usr/local/hadoop/etc/hadoop/core-site.xml
/usr/local/hadoop/etc/hadoop/hdfs-site.xml
/usr/local/hadoop/etc/hadoop/mapred-site.xml
/usr/local/hadoop/etc/hadoop/yarn-site.xml
The installation proceeds in two steps: first configure HDFS, then configure YARN.
Configuring HDFS requires modifying hadoop-env.sh, slaves, core-site.xml, and hdfs-site.xml.
2.1) Modify hadoop-env.sh
Add at the bottom of the file
export JAVA_HOME=/usr/local/java/jdk1.7.0_79
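The edit can also be done non-interactively. A minimal sketch; a temp file stands in for /usr/local/hadoop/etc/hadoop/hadoop-env.sh so the snippet is safe to try anywhere:

```shell
# Append the JAVA_HOME export and confirm the line landed.
# On the cluster, point ENV_FILE at the real hadoop-env.sh instead.
ENV_FILE=$(mktemp)
echo 'export JAVA_HOME=/usr/local/java/jdk1.7.0_79' >> "$ENV_FILE"
tail -n 1 "$ENV_FILE"
```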
2.2) Modify Slaves
The slaves file lists the hostnames of the slave nodes:
slave1
slave2
slave3
slave4
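The file is plain text, one hostname per line, so it can be generated in one command. A sketch; a temp file stands in for /usr/local/hadoop/etc/hadoop/slaves so this is safe to run anywhere:

```shell
# Write the four slave hostnames, one per line, and show the result.
SLAVES_FILE=$(mktemp)
printf '%s\n' slave1 slave2 slave3 slave4 > "$SLAVES_FILE"
cat "$SLAVES_FILE"
```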
2.3) Modify core-site.xml
Refer to the official documentation and add the following settings inside <configuration></configuration>:
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:8020</value>
    <final>true</final>
</property>
<property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop/tmp</value>
</property>
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
</property>
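fs.defaultFS is the URI that clients resolve bare paths against, so once core-site.xml is distributed, listing `/` and listing the full `hdfs://master:8020/` URI are equivalent. A sketch, guarded so it is harmless on a machine without Hadoop on the PATH:

```shell
# The two listings below should show the same contents.
if command -v hdfs >/dev/null 2>&1; then
    hdfs dfs -ls /
    hdfs dfs -ls hdfs://master:8020/
    CHECKED="ran"
else
    CHECKED="hdfs not on PATH; run on a cluster node"
fi
echo "$CHECKED"
```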
2.4) Modify hdfs-site.xml
Refer to the official documentation and add the following settings inside <configuration></configuration>:
On the NameNode, add:
<property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/hadoop/dfs/name</value>
</property>
<property>
    <name>dfs.blocksize</name>
    <value>67108864</value>
</property>
<property>
    <name>dfs.namenode.handler.count</name>
    <value>100</value>
</property>
<property>
    <name>dfs.namenode.hosts</name>
    <value>slave1,slave2,slave3,slave4</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:9001</value>
</property>
On the DataNodes, add:
<property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/hadoop/dfs/data</value>
</property>
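The directories named in these configs (hadoop.tmp.dir, dfs.namenode.name.dir, dfs.datanode.data.dir) are not always created automatically, so it can help to pre-create them before formatting. A sketch; HADOOP_HOME defaults to a temp stand-in here so the snippet is safe to try, but on the cluster set HADOOP_HOME=/usr/local/hadoop:

```shell
# Pre-create the tmp, name, and data directories referenced above.
HADOOP_HOME=${HADOOP_HOME:-$(mktemp -d)}
mkdir -p "$HADOOP_HOME/tmp" "$HADOOP_HOME/dfs/name" "$HADOOP_HOME/dfs/data"
ls -d "$HADOOP_HOME"/dfs/*
```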
2.5) Format HDFS
root@master:/usr/local/hadoop# hdfs namenode -format
2.6) Start the cluster
You can start the whole cluster via sbin/start-dfs.sh, or start the daemon on each node individually with hadoop-daemon.sh start datanode.
root@master:/usr/local/hadoop# start-dfs.sh
Running the jps command on the NameNode:
root@master:/usr/local/hadoop/sbin# jps
4760 NameNode
5103 SecondaryNameNode
13518 Jps
Running the jps command on a DataNode:
root@slave1:/usr/local/hadoop/sbin# jps
7258 Jps
3042 DataNode
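Beyond checking jps host by host, `hdfs dfsadmin -report` summarizes the whole cluster (live DataNodes, capacity) from the master. A sketch, guarded so it is harmless on a machine without Hadoop installed:

```shell
# Cluster-wide health check; with slave1..slave4 up, the report
# should show four live DataNodes.
if command -v hdfs >/dev/null 2>&1; then
    hdfs dfsadmin -report
    REPORT_STATUS="ran"
else
    REPORT_STATUS="hdfs not on PATH; add /usr/local/hadoop/bin"
fi
echo "$REPORT_STATUS"
```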
2.7) Uploading Files
Create two new text files:
root@master:/usr/local/hadoop# echo "Hello World" >> file1
root@master:/usr/local/hadoop# echo "Hello Hadoop" >> file2
Upload them to HDFS:
root@master:/usr/local/hadoop# hdfs dfs -put file* /
Display the file information:
root@master:/usr/local/hadoop# hdfs dfs -ls /
Found 2 items
-rw-r--r--   3 root supergroup         12 2016-06-02 20:02 /file1
-rw-r--r--   3 root supergroup         13 2016-06-02 20:02 /file2
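Note the `3` in the second column: with four DataNodes and the default replication factor of 3, each block is stored on three of them. A final round-trip sketch, guarded so it is harmless without Hadoop on the PATH:

```shell
# Read the uploaded files back to confirm the content survived.
if command -v hdfs >/dev/null 2>&1; then
    hdfs dfs -cat /file1 /file2
    CAT_STATUS="ran"
else
    CAT_STATUS="hdfs not on PATH; run this on master"
fi
echo "$CAT_STATUS"
```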