I. New users and user groups
Note: this step is optional, but it is better to run Hadoop as a dedicated user.
1. Create a new user group
sudo addgroup hadoop
2. Create a new user
sudo adduser --ingroup hadoop hadoop
3. Add Hadoop User Rights
sudo gedit /etc/sudoers
After opening the sudoers file, add an entry for the hadoop user (sudo visudo is a safer way to edit this file, since it validates the syntax before saving):
# User privilege specification
root    ALL=(ALL:ALL) ALL
hadoop  ALL=(ALL:ALL) ALL
4. Log in as the hadoop user
II. Install SSH
sudo apt-get install openssh-server
After the installation is complete, start the service
sudo /etc/init.d/ssh start
To check that the service started correctly: ps -e | grep ssh
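The check above can be wrapped in a tiny script. This is a sketch, not part of the original guide; the bracketed grep pattern is a common trick so the grep process never matches itself:

```shell
# Check whether sshd is running; prints a status line either way.
if ps -e | grep -q '[s]shd'; then
  STATUS="sshd running"
else
  STATUS="sshd not running"
fi
echo "$STATUS"
```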
Both cluster and single-node modes require passwordless SSH login, so first set up passwordless SSH to the local machine.
Input command
ssh localhost
Type yes at the prompt the first time you log in.
To set up passwordless login, generate a private/public key pair:
ssh-keygen -t rsa -P ""
Next, append the public key to authorized_keys, which stores the public keys of every client allowed to log in to this machine as the current user.
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
You can then use ssh localhost to log in without a password.
Use exit to log out.
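The key-generation and append steps above can be rehearsed as one script. This sketch uses a throwaway directory so it can run safely anywhere (an assumption for testing); in the real setup the files live in ~/.ssh:

```shell
# Rehearse the passwordless-SSH key setup in a scratch directory.
DIR=$(mktemp -d)
ssh-keygen -q -t rsa -P "" -f "$DIR/id_rsa"   # -q: no interactive output
cat "$DIR/id_rsa.pub" >> "$DIR/authorized_keys"
chmod 600 "$DIR/authorized_keys"   # sshd ignores group/world-writable files
wc -l < "$DIR/authorized_keys"     # one appended key -> one line
```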
III. Install the Java environment
Older tutorials recommend installing Oracle's JDK rather than OpenJDK, but according to http://wiki.apache.org/hadoop/HadoopJavaVersions, recent releases of OpenJDK 1.7 work fine. Install OpenJDK 7 with the following command:
sudo apt-get install openjdk-7-jre openjdk-7-jdk
To verify the installation, run java -version; output like the following indicates success.
IV. Install Hadoop 2.4.1
Download Hadoop 2.4.1 from http://mirrors.cnnic.cn/apache/hadoop/common/hadoop-2.4.1/hadoop-2.4.1.tar.gz. The installation mainly follows the official tutorial at http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html.
After downloading, extract the archive to /usr/local/, then rename the folder to hadoop.
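The extract-and-rename step can be scripted. The sketch below rehearses it in a scratch directory so it can run anywhere; in the real install, substitute /usr/local for $ROOT and run the tar and mv under sudo:

```shell
# Rehearse the extract-and-rename step in a scratch directory.
ROOT=$(mktemp -d)
mkdir "$ROOT/hadoop-2.4.1"                          # stand-in for the tarball contents
tar -C "$ROOT" -czf "$ROOT/hadoop-2.4.1.tar.gz" hadoop-2.4.1
rm -r "$ROOT/hadoop-2.4.1"
tar -C "$ROOT" -xzf "$ROOT/hadoop-2.4.1.tar.gz"     # same as: sudo tar -zxf ... in /usr/local
mv "$ROOT/hadoop-2.4.1" "$ROOT/hadoop"              # rename to plain "hadoop"
ls "$ROOT"
```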
Give the hadoop user read and write permission on the folder (this tripped me up at the time because I did not understand file permissions).
One common suggestion is:
sudo chmod 774 /usr/local/hadoop
But after I used this command, the folders were hidden and could not be opened. In the end I deleted the hadoop folder, re-extracted it, and solved the problem with the following instead:
sudo chown -R hadoop:hadoop /usr/local/hadoop
Configure ~/.bashrc
Before configuring this file, you need to know the Java installation path in order to set the JAVA_HOME environment variable. You can find the path with the following command:
update-alternatives --config java
The output lists the installed JDKs and their paths (here, under /usr/lib/jvm/java-7-openjdk-i386).
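The path printed by update-alternatives points at the java binary itself; JAVA_HOME is that path with the trailing /jre/bin/java (or /bin/java) stripped. A small sketch, using the i386 OpenJDK 7 path from this guide as the example:

```shell
# Turn the java binary path from update-alternatives into JAVA_HOME
# by stripping the /jre/bin/java suffix.
P=/usr/lib/jvm/java-7-openjdk-i386/jre/bin/java
JAVA_HOME=${P%/jre/bin/java}
echo "JAVA_HOME=$JAVA_HOME"
```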
Configure the. bashrc file
sudo gedit ~/.bashrc
#HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-i386
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
#HADOOP VARIABLES END
Execute the following to make the added environment variable effective:
source ~/.bashrc
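After sourcing, it is worth confirming the variables actually took effect. This sketch sets the same HADOOP_INSTALL and PATH values locally (so it runs standalone) and checks that the Hadoop bin directory landed on PATH:

```shell
# Sanity-check the variables added to ~/.bashrc.
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin
case ":$PATH:" in
  *":$HADOOP_INSTALL/bin:"*) BIN_OK="yes" ;;   # bin dir found on PATH
  *) BIN_OK="no" ;;
esac
echo "HADOOP_INSTALL=$HADOOP_INSTALL bin-on-PATH=$BIN_OK"
```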
Edit /usr/local/hadoop/etc/hadoop/hadoop-env.sh
Execute the following command to open the file for editing:
sudo gedit /usr/local/hadoop/etc/hadoop/hadoop-env.sh
Locate the JAVA_HOME variable and modify it as follows:
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-i386
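If you prefer not to open gedit, the same edit can be done non-interactively with sed. This sketch rehearses the substitution on a scratch file seeded with the stock hadoop-env.sh line; point F at the real /usr/local/hadoop/etc/hadoop/hadoop-env.sh to apply it for real:

```shell
# Rewrite the JAVA_HOME line in hadoop-env.sh non-interactively.
F=$(mktemp)
echo 'export JAVA_HOME=${JAVA_HOME}' > "$F"   # the stock hadoop-env.sh line
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-i386|' "$F"
cat "$F"
```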
V. Test WordCount
The stand-alone installation is now complete. Next, run Hadoop's bundled WordCount example to verify that the installation succeeded.
Create an input folder under /usr/local/hadoop:
sudo mkdir input
Copy README.txt to input
cp README.txt input
Run WordCount:
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar wordcount input output
Run cat output/* to view the word-count results.
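To make clear what the WordCount job actually computes, here is a plain-shell analogue (an illustration only, not Hadoop): it splits text into words and prints word/count pairs in the same tab-separated form you see from cat output/*:

```shell
# Plain-shell analogue of WordCount: word<TAB>count per line.
RESULT=$(printf 'hadoop is fun\nhadoop works\n' \
  | tr -s ' \n' '\n' \
  | sort | uniq -c \
  | awk '{print $2 "\t" $1}')
echo "$RESULT"
```

For the sample text above, "hadoop" appears twice and every other word once.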
Building a Hadoop 2.4 cluster on Ubuntu (standalone mode)