After tangling with Ubuntu installs for a long time and trying various Hadoop versions countless times, each attempt ending in tragedy, I finally came across www.linuxidc.com/Linux/2013-01/78391.htm. What follows is that guide, slightly modified.
First, install the JDK
1. Download and install
sudo apt-get install openjdk-7-jdk
When prompted for the current user's password, enter it and press Enter;
When prompted with yes/no, type yes and press Enter; keep going until the installation completes.
2. Enter java -version on the command line to check whether the installation succeeded
3. Configure Environment variables
Edit the file /etc/profile:
sudo gedit /etc/profile
Add the following three lines to the bottom of the file
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64 (use the actual directory where Java is installed)
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib:$CLASSPATH
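To make the new variables take effect in the current shell and confirm they point at a real JDK, a quick check like the following can be used (a minimal sketch; the JAVA_HOME value is the one assumed above):
source /etc/profile
echo $JAVA_HOME    # should print the JDK directory set above
java -version      # should report the installed OpenJDK version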
Second, configure SSH password-free login
1. Install the required software
sudo apt-get install ssh
2. Configure SSH password-free login
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
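If ssh still prompts for a password after this, loose permissions on the .ssh directory are a common cause; tightening them is a safe optional step (assuming a default OpenSSH setup):
chmod 700 ~/.ssh                     # only the owner may access the key directory
chmod 600 ~/.ssh/authorized_keys     # only the owner may read the authorized keys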
3. Verify Success
ssh localhost
Third, install Hadoop
1. Download Hadoop to the server from:
archive.apache.org/dist/hadoop/common/hadoop-1.0.4/
In fact the download lands in the Downloads folder under home; it needs to be moved to the /home location used in this tutorial, and then the decompression in the next step can be done.
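As a concrete illustration, assuming the browser saved the archive to ~/Downloads and Hadoop is kept under the home directory as in this tutorial (both paths are assumptions, adjust as needed):
mv ~/Downloads/hadoop-1.0.4.tar.gz ~/   # move the archive out of the Downloads folder
cd ~                                    # work from the home directory for the steps below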
2. Decompression
tar -xvf hadoop-1.0.4.tar.gz
3. Configure Hadoop
A note here: many installation tutorials put Hadoop under /usr/local/, which means the file changes below would have to be made from a terminal with administrator privileges. This tutorial uses /home instead, so the files can be opened and modified directly in gedit without going through the terminal.
The following four configuration files are all in the conf/ directory of the extracted Hadoop folder and can be opened directly, as shown below.
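Because the folder sits under the home directory, the files can be opened without administrator privileges, for example all at once from a terminal (the ~/hadoop-1.0.4 location is an assumption based on where the archive was extracted):
cd ~/hadoop-1.0.4
gedit conf/hadoop-env.sh conf/core-site.xml conf/hdfs-site.xml conf/mapred-site.xml   # opens each file as a tab in gedit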
(1) Configure hadoop-env.sh
Modify JAVA_HOME:
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
(2) Configure core-site.xml
Modify Hadoop's core configuration file, core-site.xml, which sets the address and port number of HDFS:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
(3) Configure hdfs-site.xml
Modify Hadoop's HDFS configuration; the replication factor defaults to 3, but since this is a single-machine installation it needs to be set to 1:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
(4) Configure mapred-site.xml
Modify Hadoop's MapReduce configuration file, which sets the address and port of the JobTracker:
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>
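With the three XML files edited, a quick sanity check is to print the properties back from the hadoop-1.0.4 directory (assuming the default conf/ layout):
grep -A 1 "fs.default.name" conf/core-site.xml
grep -A 1 "dfs.replication" conf/hdfs-site.xml
grep -A 1 "mapred.job.tracker" conf/mapred-site.xml
# each command should print the <name> line followed by the <value> you set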
4. Initialize HDFS
Before executing the following command, make sure the contents of the extracted hadoop-1.0.4 folder have been placed directly under /home, as described above.
bin/hadoop namenode -format
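Note that bin/hadoop is a path relative to the extracted folder, so the command must be run from inside it, and the format step only needs to be done once; a minimal sketch (the ~/hadoop-1.0.4 location is assumed):
cd ~/hadoop-1.0.4                # change into the extracted Hadoop folder
bin/hadoop namenode -format      # one-time formatting of the HDFS namespace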
5. Start All Hadoop services
bin/start-all.sh
6. Verify that the installation is successful
If entering jps in the terminal lists 6 processes (NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker, and Jps itself), all services have started.
Open your browser and enter the following URLs:
http://localhost:50030 (Web page for MapReduce)
http://localhost:50070 (HDFS Web page)
If both pages can be accessed successfully, the Hadoop installation has succeeded.
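If no browser is handy, the same check can be made from the terminal (ports 50030 and 50070 are the defaults used above; curl is assumed to be installed):
jps                                      # should list the 6 Hadoop processes
curl -s http://localhost:50070 | head    # should return HTML from the HDFS (NameNode) page
curl -s http://localhost:50030 | head    # should return HTML from the MapReduce (JobTracker) page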