Java + Hadoop on Ubuntu 14.04 LTS

Source: Internet
Author: User
Tags: Java SE

First, install Ubuntu 14.04 LTS in a virtual machine.

Pre-work:

Update the package sources, install vim, install VMware Tools, and install a Chinese input method.

sudo apt-get update
sudo apt-get install vim

If the path to the Linux kernel headers cannot be found when installing VMware Tools:

Perform the following steps:

sudo apt-get update
sudo apt-get install build-essential
sudo apt-get install linux-headers-$(uname -r)
cd /lib/modules/$(uname -r)/build/include/linux
sudo ln -s ../generated/uapi/linux/version.h version.h

Then run sudo ./vmware-install.pl again.

Install the Chinese input method in the English version of Ubuntu14.04:

    1. First, install Chinese language support via System Settings → Language Support
    2. Then add the Pinyin input method in the IBus Text Entry settings panel
    3. The default shortcut to switch between input methods is Win+Space

Modify Host Name:

Change the computer name by editing the /etc/hostname file: sudo gedit /etc/hostname

Assume a total of 3 nodes (master, slave1, slave2).

Map the node hostnames to IPs on the master and on each slave:

sudo gedit /etc/hosts

192.168.1.100 master
192.168.1.101 slave1
192.168.1.102 slave2

Note: the IP address can also be set with: ifconfig eth0 xxx

(Configure the network card and IPs according to your actual environment.)
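As a quick sanity check, the format of the hosts entries above can be exercised against a scratch copy (the path /tmp/hosts.demo is purely illustrative; on the real machines the entries live in /etc/hosts):

```shell
# write the example entries to a scratch file
cat > /tmp/hosts.demo <<'EOF'
192.168.1.100 master
192.168.1.101 slave1
192.168.1.102 slave2
EOF
# print just the hostname column to confirm each line is "IP hostname"
awk '{print $2}' /tmp/hosts.demo
```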

Create a user group and user:

sudo addgroup hadoop
sudo adduser --ingroup hadoop hadoop

To give the user root (sudo) privileges:

sudo gedit /etc/sudoers

Under the line "root ALL=(ALL:ALL) ALL", add:

hadoop ALL=(ALL:ALL) ALL

Installing SSH

sudo apt-get install rsync

sudo apt-get install ssh (this also installs openssh-server automatically)
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa (generate a key pair, DSA or RSA type, without a passphrase, so login is automatic)
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys (place the public key in the authorized_keys authorization file)
ssh localhost

Method Two:

ssh-keygen -t rsa -P "" (this will not prompt for a passphrase, and generates the private and public keys id_rsa and id_rsa.pub under the ~/.ssh folder)

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys (append the public key to the authorized_keys authorization file)
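On a real cluster, the master's public key also has to end up in each slave's authorized_keys. A minimal local sketch of that append step, using throwaway /tmp/demo_* paths (on a cluster you would first scp the public key over to the slave):

```shell
# generate a throwaway key pair for the "master" side (no passphrase)
mkdir -p /tmp/demo_master /tmp/demo_slave
ssh-keygen -t rsa -N "" -q -f /tmp/demo_master/id_rsa
# append the public key to the "slave" side authorized_keys, as in the steps above
cat /tmp/demo_master/id_rsa.pub >> /tmp/demo_slave/authorized_keys
# authorized_keys must not be world-readable or sshd will ignore it
chmod 600 /tmp/demo_slave/authorized_keys
```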
[Note]:

Check that the SSH service is started:

ps -e | grep ssh

Start/restart/stop the SSH service: sudo /etc/init.d/ssh start (or restart / stop)

The SSH server configuration file is /etc/ssh/sshd_config.

After installing SSH, you can log in to Ubuntu from Windows over SSH by entering its IP in the PuTTY software.

For more detailed putty and SSH use see: http://www.linuxidc.com/Linux/2013-07/87368.htm

Explanation of public and private keys see: http://blog.csdn.net/tanyujing/article/details/17348321

With RSA (asymmetric), the public key is used to encrypt data and verify signatures, while the private key is used to decrypt data and create signatures. DSA is used only for signing (authentication) and is faster at generating signatures.
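The sign/verify split can be demonstrated with OpenSSL (the key and file names here are arbitrary examples): the private key produces the signature, and only the matching public key verifies it.

```shell
# generate an RSA key pair: private key, then derive the public key from it
openssl genrsa -out /tmp/demo_rsa.pem 2048
openssl rsa -in /tmp/demo_rsa.pem -pubout -out /tmp/demo_rsa_pub.pem
# sign a message with the private key ...
echo "hello hadoop" > /tmp/msg.txt
openssl dgst -sha256 -sign /tmp/demo_rsa.pem -out /tmp/msg.sig /tmp/msg.txt
# ... and verify it with the public key (prints "Verified OK")
openssl dgst -sha256 -verify /tmp/demo_rsa_pub.pem -signature /tmp/msg.sig /tmp/msg.txt
```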

JAVA SE Download

[Note] You can also install directly from the command line under Ubuntu: sudo apt-get install openjdk-6-jre

This installs the JDK into the /usr/lib/jvm/java-6-openjdk-i386 directory by default.

To download, install, and configure manually:

Address: http://www.oracle.com/technetwork/java/javase/downloads/index.html

This guide downloads Java SE 7u67.

Java 7 API: http://docs.oracle.com/javase/7/docs/api/

Place the downloaded Java JDK in the ~/setup/java-jdk-7u67 directory.

Extract the JDK to the system directory and fix its permissions (use the tarball name matching your architecture):

cd /usr/local/lib
sudo tar -zxvf /home/sunny/setup/java-jdk-7u67/jdk-7u67-linux-<arch>.tar.gz
sudo chmod 755 -R /usr/local/lib/jdk1.7.0_67

Modify the system variables to configure the Java environment:

sudo gedit /etc/profile

export JAVA_HOME=/usr/local/lib/jdk1.7.0_67
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin

Make it take effect: source /etc/profile

To check whether the installation succeeded: java -version
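A quick scripted sanity check that the profile additions above took effect (the JDK path is the same one configured in /etc/profile):

```shell
# apply the same exports as in /etc/profile
export JAVA_HOME=/usr/local/lib/jdk1.7.0_67
export PATH=$PATH:$JAVA_HOME/bin
# confirm that JAVA_HOME/bin is now on the PATH
case ":$PATH:" in
  *":$JAVA_HOME/bin:"*) echo "PATH ok" ;;
  *) echo "PATH missing $JAVA_HOME/bin" ;;
esac
```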

Download Hadoop 1.2.1

Website: http://mirrors.ibiblio.org/apache/hadoop/common

Download tar.gz version

Place the downloaded file in the ~/setup/hadoop directory, then create the directory ~/usr in your home directory:

cd
mkdir -p ~/usr/hadoop
cd ~/usr/hadoop
tar -zxvf ~/setup/hadoop/hadoop-1.2.1.tar.gz
cd ~/usr/hadoop/hadoop-1.2.1/conf

Modify the configuration files:

core-site.xml, mapred-site.xml, hdfs-site.xml, hadoop-env.sh:

Scenario one:

Pseudo-distributed configuration

core-site.xml:

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

mapred-site.xml:

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
    </property>
</configuration>

hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

hadoop-env.sh:

export JAVA_HOME=/usr/local/lib/jdk1.7.0_67

Format the Namenode file system

cd ~/usr/hadoop-1.2.1/

bin/hadoop namenode -format

Open service:

cd ~/usr/hadoop-1.2.1/bin

./start-all.sh

To view the nodes that are started:

jps

To check that the system is running:

Enter localhost:50070 in the browser
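Once the web UI is up, a small MapReduce job is a good end-to-end check. A sketch using the wordcount example bundled with Hadoop 1.2.1 (the jar name follows that release's convention; run it on the machine where the daemons are started):

```shell
cd ~/usr/hadoop-1.2.1
# put some sample input into HDFS
bin/hadoop fs -mkdir input
bin/hadoop fs -put conf/*.xml input
# run the bundled wordcount example and inspect the result
bin/hadoop jar hadoop-examples-1.2.1.jar wordcount input output
bin/hadoop fs -cat 'output/part-r-00000' | head
```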

Scenario Two:

Distributed configuration:

core-site.xml:

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://master:9000</value>
    </property>
</configuration>

mapred-site.xml:

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>master:9001</value>
    </property>
</configuration>

hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.name.dir</name>
        <value>~/usr/hadoop-1.2.1/datalog1,~/usr/hadoop-1.2.1/datalog2</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>~/usr/hadoop-1.2.1/data1,~/usr/hadoop-1.2.1/data2</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

hadoop-env.sh:

export JAVA_HOME=/usr/local/lib/jdk1.7.0_67

Modify the masters and slaves files:

gedit ~/usr/hadoop-1.2.1/conf/masters (fill in: master)
gedit ~/usr/hadoop-1.2.1/conf/slaves (fill in: slave1 and slave2, one per line)

Format the Namenode file system

cd ~/usr/hadoop-1.2.1/

bin/hadoop namenode -format

Replicating to the slave nodes:

On the slave1 and slave2 machines:

    1. Install the JDK
    2. Install SSH, copy the master's public key id_rsa.pub to each slave node, and append it to the authorized_keys file
    3. Copy the already-configured ~/usr/hadoop-1.2.1 from master to the slave nodes, and delete the datalog and data directories under hadoop-1.2.1 on the child nodes
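Step 3 above is typically done with scp, assuming the hadoop user and the ~/usr directory exist on every node:

```shell
# copy the configured Hadoop tree from master to each slave
for host in slave1 slave2; do
  scp -r ~/usr/hadoop-1.2.1 hadoop@"$host":~/usr/
done
```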

Starting the services is the same as above.

To start the services separately:

Start the namenode and secondarynamenode: bin/start-dfs.sh

Start a datanode: bin/hadoop-daemon.sh start datanode

Start a tasktracker: bin/hadoop-daemon.sh start tasktracker




