This post records the key steps for setting up an HDFS cluster in a Linux environment.
Installation package: hadoop-2.7.3.tar.gz
Number of nodes: 3 nodes.
I. Download the Installation Package
Official website: http://hadoop.apache.org/releases.html
II. Server Planning
Master: NameNode, DataNode
Node1: DataNode
Node2: SecondaryNameNode, DataNode
III. Configure Hostname and Hosts
Add the following entries to /etc/hosts on every node:
192.168.13.4 Master
192.168.13.5 Node1
192.168.13.2 Node2
IV. Upload and Unzip
Upload the downloaded package to the server and unzip it:
tar -zxvf hadoop-2.7.3.tar.gz
V. Create the Data Directories
mkdir /data/hdfs_datas
mkdir -p /data/hdfs_datas/data/tmp
chmod 755 hdfs_datas/
VI. Modify the Configuration Files
Configure core-site.xml
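A minimal core-site.xml sketch for this cluster. It assumes Master (from the server plan) as the NameNode host; the port 9000 and the reuse of the tmp directory created in step V are illustrative choices, not taken from the original post:

```xml
<configuration>
  <!-- Default filesystem URI; Master is the NameNode host from step II -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://Master:9000</value>
  </property>
  <!-- Base temp/working directory; points at the directory created in step V -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hdfs_datas/data/tmp</value>
  </property>
</configuration>
```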
Configure hdfs-site.xml
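A minimal hdfs-site.xml sketch. The replication factor of 3 matches the three DataNodes in the plan, and the SecondaryNameNode address matches Node2 from step II; the storage subdirectory names under /data/hdfs_datas are assumed, not from the original post:

```xml
<configuration>
  <!-- Block replication: three DataNodes in this cluster -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <!-- NameNode metadata and DataNode block storage (assumed subdirectories) -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/hdfs_datas/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/hdfs_datas/data</value>
  </property>
  <!-- SecondaryNameNode runs on Node2 per the server plan -->
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>Node2:50090</value>
  </property>
</configuration>
```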
Configure hadoop-env.sh: set JAVA_HOME explicitly, since the start scripts launched over SSH do not reliably inherit it from the shell. The java binary on these nodes lives at one of:
/usr/lib/jvm/java-8-openjdk-arm64/bin/java
/usr/lib/jvm/java-1.7.0-openjdk-arm64/bin/java
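Assuming the OpenJDK 8 installation above is the one in use, the hadoop-env.sh line would look like this (note that JAVA_HOME is the JDK root directory, i.e. the path above without the trailing /bin/java):

```shell
# etc/hadoop/hadoop-env.sh
# JAVA_HOME must be the JDK root, not the java binary itself
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-arm64
```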
Configure slaves
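The slaves file (etc/hadoop/slaves) lists the DataNode hosts, one per line. Per the server plan in step II, all three nodes run a DataNode, so the file would contain:

```text
Master
Node1
Node2
```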
VII. Passwordless SSH Login
ssh-keygen -t rsa
ssh-copy-id 192.168.13.6
ssh-copy-id 192.168.13.7
ssh-copy-id 192.168.13.8
VIII. Distribute with scp
scp -r hadoop-2.7.3/ <user>@192.168.13.5:/opt/
scp -r hadoop-2.7.3/ <user>@192.168.13.2:/opt/
IX. Format the NameNode
./hadoop namenode -format
X. Start and Stop the HDFS Service
The start/stop scripts live in the sbin directory of the Hadoop installation (here under /opt, matching the scp target in step VIII):
cd /opt/hadoop-2.7.3/sbin
./start-dfs.sh
./stop-dfs.sh