Installing Hadoop on Ubuntu

Source: Internet
Author: User
Keywords: installation, name, value, ssh, xml
This is an experimental setup on your own notebook: if Hadoop is unfamiliar, consider installing a pilot version on your own computer first, and only then deploy to production machines. First, install VMware Workstation on your computer; after that, install the Ubuntu operating system in the virtual machine. I installed Ubuntu 11.10, which you can verify with the lsb_release -a command. If that command is missing, install it with sudo apt install lsb-release (the package that provides lsb_release).


1. Create a new account named hadoop on this operating system.


tinyfun@ubuntu:/home$ sudo addgroup hadoop
Adding group `hadoop' (GID 1001) ...
Done.


Then add the hadoop user: sudo adduser --ingroup hadoop hadoop. Next, run sudo gedit /etc/sudoers and add the line hadoop ALL=(ALL:ALL) ALL to the sudoers file to give the hadoop user root privileges. The default machine name is ubuntu; if you need to change it, use sudo gedit /etc/hostname.


2. Set up password-free SSH login


First install the SSH service: sudo apt install ssh openssh-server. Then create an SSH key with the command ssh-keygen -t rsa -P "". After pressing Enter, two files are generated under ~/.ssh/: id_rsa and id_rsa.pub. Enter the ~/.ssh/ directory and append id_rsa.pub to the authorized_keys authorization file (there is no authorized_keys file at first) using the command: cat id_rsa.pub >> authorized_keys. After that, ssh localhost logs in successfully without a password.
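The steps above can be sketched as a short script (assuming openssh-server is already installed; key generation is skipped if a key already exists, so it is safe to re-run):

```shell
# Create an RSA key pair with an empty passphrase, then authorize it
# for logins to this machine (the password-free SSH setup described above).
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
# Append the public key to the list of keys allowed to log in here
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

After this, ssh localhost should connect without prompting for a password.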


3. Installing Hadoop


Create a folder named hadoop in the hadoop account's home directory, then download the latest version of Hadoop with wget http://mirror.bjtu.edu.cn/apache/hadoop/common/hadoop-1.1.1/hadoop-1.1.1-bin.tar.gz (the current stable version is 1.1.1). Extract it into the current folder with tar -xvf hadoop-1.1.1-bin.tar.gz, then rename the result with mv hadoop-1.1.1 hadoop. Now begin configuring Hadoop. Four files need to be configured; this is the simplest configuration, with most parameters left at Hadoop's defaults. All four files are in the conf directory: hadoop-env.sh, core-site.xml, hdfs-site.xml, and mapred-site.xml.


1) Modify hadoop-env.sh


Run sudo gedit hadoop-env.sh and change one line to export JAVA_HOME=/usr/lib/jvm/java-6-sun-1.6.0.24. This configures the Java virtual machine's home directory, i.e. where Java is installed on your machine.
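The exact path varies by machine; a small sketch for deriving a JAVA_HOME candidate from whatever java binary is on the PATH (falling back to the article's example path if none is found):

```shell
# Resolve the real JDK directory behind the `java` on PATH and strip
# the trailing /bin/java to get a JAVA_HOME candidate.
java_bin=$(command -v java || true)
if [ -n "$java_bin" ]; then
  JAVA_HOME=$(readlink -f "$java_bin")
  JAVA_HOME=${JAVA_HOME%/bin/java}
else
  # Article's example path; adjust to where your JDK actually lives.
  JAVA_HOME=/usr/lib/jvm/java-6-sun-1.6.0.24
fi
export JAVA_HOME
echo "JAVA_HOME=$JAVA_HOME"
```

Paste the resulting path into the export JAVA_HOME line of conf/hadoop-env.sh.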


2) Modify Core-site.xml


<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoop/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>


3) Modify Hdfs-site.xml


<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>


4) Modify Mapred-site.xml


<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>