Building Hadoop in Pseudo-Distributed Mode on VirtualBox: Download and Configuration
My machine is a bit underpowered and can't run an X Window environment, so everything here is done directly from the shell; if you'd rather point and click with a mouse, this one may not be for you ~
1. Hadoop download and extraction
http://mirror.bit.edu.cn/apache/hadoop/common/stable2/hadoop-2.7.1.tar.gz
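Working purely from the shell, the tarball can be fetched straight from the mirror above with wget, for example:
wget http://mirror.bit.edu.cn/apache/hadoop/common/stable2/hadoop-2.7.1.tar.gz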
mkdir /usr/hadoop
tar -xzvf hadoop-2.7.1.tar.gz
mv hadoop-2.7.1 /usr/hadoop/
2. Under the /usr/hadoop/ directory, create the tmp, hdfs/name, and hdfs/data directories
mkdir /usr/hadoop/tmp
mkdir /usr/hadoop/hdfs
mkdir /usr/hadoop/hdfs/name
mkdir /usr/hadoop/hdfs/data
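If you prefer, a single mkdir -p creates all three (and the parent hdfs directory) in one go:
mkdir -p /usr/hadoop/tmp /usr/hadoop/hdfs/name /usr/hadoop/hdfs/data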
3. Configure the environment variables
Move into the Hadoop folder that you just unzipped
cd /usr/hadoop/hadoop-2.7.1
① Point the Hadoop configuration files at your Java installation
Edit etc/hadoop/hadoop-env.sh and etc/hadoop/yarn-env.sh:
comment out the existing JAVA_HOME line with # and add your own Java path, for example:
export JAVA_HOME=/usr/java/jdk1.8.0_20
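After the edit, the relevant section of hadoop-env.sh should look roughly like this (hadoop-env.sh in 2.7.1 ships with export JAVA_HOME=${JAVA_HOME}; the jdk1.8.0_20 path is just an example, use your own installation path):
# export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/usr/java/jdk1.8.0_20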
② Append the following to /etc/profile:
export HADOOP_HOME=/usr/hadoop/hadoop-2.7.1
export PATH=$PATH:$HADOOP_HOME/bin
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
The last two lines are optional; without them, you may see a warning along these lines when running Hadoop commands:
You have loaded library /usr/hadoop/hadoop-2.7.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now. It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
Once added, remember to run . /etc/profile (or source /etc/profile) so the changes take effect.
Run hadoop version to check; if the version information prints, the setup worked.
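For reference, a successful check starts like this (the build details after the first line will differ):
$ hadoop version
Hadoop 2.7.1
...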
4. Modify the Hadoop configuration files
Enter the etc/hadoop/ directory.
① Modify core-site.xml, adding the following properties (inside the <configuration> element):
<property>
<name>fs.defaultFS</name>
<value>hdfs://192.168.56.120:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/usr/hadoop/tmp</value>
</property>
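As a quick sanity check that the file is being picked up, hdfs getconf can print a single key back (run from the Hadoop home directory, assuming the config above):
bin/hdfs getconf -confKey fs.defaultFS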
② Modify hdfs-site.xml, adding:
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/hadoop/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/hadoop/hdfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>192.168.56.121:9001</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
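Since dfs.webhdfs.enabled is true, once the daemons are up (step ⑦ below) you can also poke the NameNode over the WebHDFS REST API; 50070 is the default NameNode web port in Hadoop 2.x, and the IP here assumes the master above:
curl "http://192.168.56.120:50070/webhdfs/v1/?op=LISTSTATUS"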
③ Copy mapred-site.xml.template to mapred-site.xml (cp mapred-site.xml.template mapred-site.xml; Hadoop only reads the copy, not the template) and add:
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>192.168.56.120:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>192.168.56.120:19888</value>
</property>
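Note that the two jobhistory addresses only take effect once the JobHistory server is actually running; start-all.sh does not launch it, so it is started separately:
sbin/mr-jobhistory-daemon.sh start historyserver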
④ Modify yarn-site.xml, adding:
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>192.168.56.120:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>192.168.56.120:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>192.168.56.120:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>192.168.56.120:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>192.168.56.120:8088</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>768</value>
</property>
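Once the cluster is started (step ⑦ below), you can confirm that each NodeManager registered with the ResourceManager and advertises the 768 MB configured above:
bin/yarn node -list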
⑤ Configure the slaves file: comment out the original localhost and add your slave server names (the hosts file configured earlier lets the system resolve server names to IP addresses), for example:
#localhost
slave1
slave2
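For reference, the matching /etc/hosts entries from that earlier step would look something like this (the slave IPs and the name "master" are assumptions; 192.168.56.120 is the master address used throughout, so substitute your actual addresses):
192.168.56.120 master
192.168.56.121 slave1
192.168.56.122 slave2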
⑥ On the primary server, execute
bin/hdfs namenode -format
to initialize the NameNode.
If "successfully formatted" is printed and the exit status is 0, the format succeeded. If it exits with status 1, the revolution has not yet succeeded, comrade: go back over the previous steps, then format again until it works.
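If the message scrolls past too quickly, the exit status can be checked right after the format command (plain shell, nothing Hadoop-specific):
echo $?    # prints 0 if the format succeeded, 1 if it failed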
⑦ In the sbin directory, execute ./start-all.sh
⑧ Use jps to check which daemons are running.
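Which daemons jps shows depends on the host: with the layout above, the master would typically list NameNode and ResourceManager, while each slave lists DataNode and NodeManager (slave1 additionally runs the SecondaryNameNode per hdfs-site.xml). A rough example from the master, with PIDs that will of course differ:
$ jps
2401 NameNode
2725 ResourceManager
3012 Jps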
⑨ To stop everything, run sbin/stop-all.sh
And that's about it, though there may still be the odd bug lurking →_→