On Linux
First install the JDK and configure the appropriate environment variables.
Download Hadoop 1.2.1 with wget. For a production environment the 1.x series is recommended, because the 2.x series was released only recently and is still less stable.
http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz
You can use mv to move the downloaded package into the /opt directory, then extract it there with tar.
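The download-and-extract steps above, as a shell sketch (the mirror URL and the /opt location are from the article; run with appropriate permissions):

```shell
# Download Hadoop 1.2.1 from the Apache mirror listed above
wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz

# Move the archive into /opt and unpack it there
mv hadoop-1.2.1.tar.gz /opt/
cd /opt
tar -xzf hadoop-1.2.1.tar.gz   # creates /opt/hadoop-1.2.1
```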
After extraction is complete, set the JAVA_HOME environment variable in hadoop-env.sh.
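For example, the line to set in conf/hadoop-env.sh would look like the following (the JDK path here is an assumption; substitute the location of your own JDK install):

```shell
# conf/hadoop-env.sh -- point Hadoop at the JDK
# (the path below is an example; use your actual JDK directory)
export JAVA_HOME=/usr/lib/jvm/java-7-oracle
```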
core-site.xml (the file is empty by default)
Configuration:
<property><name>hadoop.tmp.dir</name><value>/hadoop</value></property>
<property><name>dfs.name.dir</name><value>/hadoop/name</value></property>
<property><name>fs.default.name</name><value>hdfs://coder:9000</value></property>  <!-- HDFS access path and port; "coder" is the hostname -->
hdfs-site.xml
<property><name>dfs.data.dir</name><value>/hadoop/data</value></property>
mapred-site.xml
<property><name>mapred.job.tracker</name><value>coder:9001</value></property>  <!-- JobTracker address and port; "coder" is the hostname -->
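Each of the property snippets above goes inside the `<configuration>` element of its respective file. As a sketch, a complete mapred-site.xml using the value given above would look like:

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>coder:9001</value>  <!-- JobTracker address; "coder" is the hostname -->
  </property>
</configuration>
```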
Once everything is configured, you must format the NameNode (i.e., initialize the HDFS file system) before starting Hadoop for the first time.
Execute: hadoop namenode -format
You can then start Hadoop with start-all.sh.
After a successful startup, run jps to check that the Hadoop-related processes are present.
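The format/start/verify steps above, put together (this assumes Hadoop's bin directory is on your PATH; the daemon names are those of Hadoop 1.x):

```shell
# Format HDFS (run once, before the first start -- it erases existing metadata)
hadoop namenode -format

# Start all daemons (HDFS + MapReduce)
start-all.sh

# Verify: jps should list NameNode, DataNode, SecondaryNameNode,
# JobTracker, and TaskTracker in addition to Jps itself
jps
```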