First, install the JDK (guide: http://www.cnblogs.com/e-star/p/4437788.html)
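If you prefer the command line to the linked guide, a minimal sketch (the openjdk-6-jdk package name is an assumption and varies by Ubuntu release; if you install OpenJDK instead of the Oracle JDK, adjust JAVA_HOME in step Third-3 accordingly):
sudo apt-get update
sudo apt-get install openjdk-6-jdk
# Verify the install
java -version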
Second, configure SSH password-free login
1. Install the required software
sudo apt-get install ssh
2. Configure SSH password-free login
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
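If the next step still prompts for a password, a common cause is overly permissive key-file permissions; tightening them usually fixes it:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys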
3. Verify success
ssh localhost
Third, install Hadoop
1. Download Hadoop to the server
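For example, assuming the Apache archive still serves the 1.0.4 release at its usual path:
wget https://archive.apache.org/dist/hadoop/core/hadoop-1.0.4/hadoop-1.0.4.tar.gz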
2. Extract the archive
tar -xvf hadoop-1.0.4.tar.gz
3. Configure Hadoop
The following four configuration files live in the conf/ directory of the extracted Hadoop folder.
(1) Configure hadoop-env.sh
Set JAVA_HOME to your JDK install path:
export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_35
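If you are unsure where your JDK lives, one way to find it:
# Resolve the real path of the java binary; strip the trailing /bin/java to get JAVA_HOME
readlink -f "$(which java)"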
(2) Configure core-site.xml
Edit Hadoop's core configuration file, core-site.xml, which sets the address and port of HDFS:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
(3) Configure hdfs-site.xml
Edit the HDFS configuration. The replication factor defaults to 3; since this is a single-node (pseudo-distributed) install, change it to 1:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
(4) Configure mapred-site.xml
Edit Hadoop's MapReduce configuration file, which sets the address and port of the JobTracker:
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>
4. Initialize HDFS (format the NameNode)
bin/hadoop namenode -format
5. Start all Hadoop services
bin/start-all.sh
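To confirm the daemons actually started, check with jps (it ships with the JDK); on a healthy pseudo-distributed node the output should list something like:
jps
# Expected processes (PIDs will differ):
# NameNode
# DataNode
# SecondaryNameNode
# JobTracker
# TaskTracker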
6. Verify that the installation is successful
Open your browser and enter the following URLs:
http://localhost:50030 (the MapReduce web UI)
http://localhost:50070 (the HDFS web UI)
If both pages load, Hadoop has been installed successfully.
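As an optional end-to-end smoke test (not part of the original steps), you can run the example job bundled with the release; the jar name below assumes the stock Hadoop 1.0.4 tarball layout:
# Estimate pi with 2 map tasks and 10 samples per map
bin/hadoop jar hadoop-examples-1.0.4.jar pi 2 10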