Hadoop 1.2.1 Installation Notes 01: Linux Setup with Password-Free SSH


Goal: Configure a Hadoop 1.2.1 test environment.

The JDK used is: jdk-7u65-linux-x64.gz

The selected Hadoop is: hadoop-1.2.1.tar.gz

All packages come from the Apache and Oracle websites.

Host Planning:

Master.hadoop    10.15.5.200
Slave01.hadoop   10.15.5.201
Slave02.hadoop   10.15.5.202

Linux version: CentOS 6.5 x86_64

/boot: stores the Linux boot files, such as the boot loader; 100 MB is recommended.

/usr: stores applications and their related data; more than 3 GB is recommended.

/var: stores frequently changing data and log files; more than 1 GB is recommended.

/home: holds the home directories and data of ordinary users; the remaining space is recommended.

/: the root directory of the Linux system, under which all other directories are mounted; more than 5 GB is recommended.

/tmp: keeping temporary files in a separate partition prevents a full file system from destabilizing the system; more than 500 MB is recommended.

swap: backs virtual memory; one to two times the physical memory size is recommended.


The boot loader password is d*****2014.

For the installation type, choose Basic Server; otherwise every needed package has to be installed by hand, which is tedious.


    • Create the hadoop user, grant it sudo rights, and configure passwordless sudo

useradd hadoop

passwd hadoop

vi /etc/sudoers

root    ALL=(ALL)       ALL

hadoop  ALL=(ALL)       ALL             (add this line)

hadoop  ALL=(ALL)       NOPASSWD: ALL   (add this line for passwordless sudo)
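
Editing /etc/sudoers with plain vi risks locking yourself out on a syntax error; visudo checks the syntax before saving. A minimal sketch (the sudo -l check at the end is just one way to verify the rule took effect):

visudo                       (as root; edits /etc/sudoers with a syntax check on save)
su - hadoop
sudo -l                      (should list the NOPASSWD rule without prompting for a password)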


    • Configure local host name resolution

[hadoop@master /]$ cat /etc/hosts
10.15.5.200 Master.hadoop
10.15.5.201 Slave01.hadoop
10.15.5.202 Slave02.hadoop
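
A quick way to confirm the entries resolve, run from master (a sanity check, not in the original notes):

[hadoop@master ~]$ ping -c 1 Slave01.hadoop
[hadoop@master ~]$ ping -c 1 Slave02.hadoop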

    • Configure the host name

vi /etc/sysconfig/network

HOSTNAME=Master.hadoop
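
The change in /etc/sysconfig/network only takes effect after a reboot. To apply it to the running system right away as well (an extra step, not in the original notes):

[root@master ~]# hostname Master.hadoop
[root@master ~]# hostname          (verify the new name)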

    • Configure IP Address

[hadoop@master /]$ cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
UUID=721f9261-45d5-4335-9b47-64459173b2a9
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
HWADDR=00:50:56:82:00:0F
IPADDR=10.15.5.200
PREFIX=24
GATEWAY=10.15.5.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
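
To apply the address without rebooting, restart the network service and check the interface (standard CentOS 6 commands, shown here as a sketch):

[hadoop@master ~]$ sudo service network restart
[hadoop@master ~]$ ifconfig eth0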

    • Mount the installation disc so the required packages can be installed; the ftp package is installed here to make moving software between hosts easier.

[root@master home]# mount -t auto /dev/cdrom /home/cdrom
mount: block device /dev/sr0 is write-protected, mounting read-only

[root@master Packages]# rpm -ivh ftp-0.17-54.el6.x86_64.rpm
warning: ftp-0.17-54.el6.x86_64.rpm: Header V3 RSA/SHA1 Signature, key ID c105b9de: NOKEY
Preparing...                ########################################### [100%]
   1:ftp                    ########################################### [100%]
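
Instead of installing each RPM by hand, the mounted disc can also be registered as a local yum repository (an optional convenience, not in the original notes; the path /home/cdrom matches the mount above, and the repo name is made up):

[root@master ~]# cat > /etc/yum.repos.d/local-cdrom.repo <<'EOF'
[local-cdrom]
name=CentOS 6.5 DVD
baseurl=file:///home/cdrom
gpgcheck=0
enabled=1
EOF
[root@master ~]# yum --disablerepo='*' --enablerepo=local-cdrom install ftp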

Goal: passwordless SSH logins between the hadoop accounts on master and the slaves.

Principle:

Roughly speaking: I cut a key and send it to you; you register that key with your door-entry system; afterwards, whenever I show up with the key, your system lets me straight in.
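
On systems that ship the ssh-copy-id helper (CentOS 6 does, in openssh-clients), the generate/copy/append/chmod steps below collapse into two commands; this is an alternative to the manual procedure, not what these notes use:

[hadoop@master ~]$ ssh-keygen -t rsa
[hadoop@master ~]$ ssh-copy-id hadoop@Slave01.hadoop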

    • On master, log in as the hadoop user and work in the /home/hadoop directory

[hadoop@master ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
8d:81:09:c8:45:3f:c0:fb:3b:a0:cf:95:b6:dd:e9:b1 hadoop@Master.hadoop
The key's randomart image is:
+--[RSA 2048]----+
| . == |
| O. + O |
| .= . |
| . . + |
| . S. |
| . .. |
| . .+. . |
| .. Ooo. + |
|. O ..... E |
+-----------------+

    • In the hidden .ssh folder, an id_rsa.pub file has been generated

[hadoop@master .ssh]$ ls -al /home/hadoop/.ssh
total 16
drwx------. 2 hadoop hadoop 4096 Jul 30 22:09 .
drwx------. 3 hadoop hadoop 4096 Jul 30 22:09 ..
-rw-------. 1 hadoop hadoop 1671 Jul 30 22:09 id_rsa
-rw-r--r--. 1 hadoop hadoop  402 Jul 30 22:09 id_rsa.pub

[hadoop@master .ssh]$ cat id_rsa.pub
ssh-rsa aaaab3nzac1yc2eaaaabiwaaaqeaugxddyelwx8urmervtpsntsw2mzoemzyzmikee3uqjmbgyocx0jv15/vtnoxjf4k+s6hccajih2oemcc4bmmi99nfwcyd9zcrvfjvn/dzhnw0yog6mymd9qw2bqwul265dhw2fncaecuyg2u1cxr0w9wzlvz54jltocrx6yuvwzzzgquw/or3zwe7pupqiohv0znyputbwew/zj7n01lbvsknlqyy164apivfzbonpmxjs/h6b8/vcxsa0ldwaqndnmxj1iqhfkmntngqclkhs8oamvl+/a6nh2i0zbw+vocuijbnknro9bkwgvuquzgckthepu0jk5erss6rpbrmq== hadoop@Master.hadoop

    • Append the contents of this file to the authorized_keys file.

[hadoop@master .ssh]$ cat id_rsa.pub >> authorized_keys
[hadoop@master .ssh]$ ls
authorized_keys  id_rsa  id_rsa.pub
[hadoop@master .ssh]$ cat authorized_keys
ssh-rsa aaaab3nzac1yc2eaaaabiwaaaqeaugxddyelwx8urmervtpsntsw2mzoemzyzmikee3uqjmbgyocx0jv15/vtnoxjf4k+s6hccajih2oemcc4bmmi99nfwcyd9zcrvfjvn/dzhnw0yog6mymd9qw2bqwul265dhw2fncaecuyg2u1cxr0w9wzlvz54jltocrx6yuvwzzzgquw/or3zwe7pupqiohv0znyputbwew/zj7n01lbvsknlqyy164apivfzbonpmxjs/h6b8/vcxsa0ldwaqndnmxj1iqhfkmntngqclkhs8oamvl+/a6nh2i0zbw+vocuijbnknro9bkwgvuquzgckthepu0jk5erss6rpbrmq== hadoop@Master.hadoop

    • You must tighten the permissions on the authorized_keys file; if they are left too open, sshd refuses to use the key and public key authentication will not work.

[hadoop@master .ssh]$ ls -al
total 20
drwx------. 2 hadoop hadoop 4096 Jul 30 22:20 .
drwx------. 3 hadoop hadoop 4096 Jul 30 22:09 ..
-rw-rw-r--. 1 hadoop hadoop  402 Jul 30 22:20 authorized_keys
-rw-------. 1 hadoop hadoop 1671 Jul 30 22:09 id_rsa
-rw-r--r--. 1 hadoop hadoop  402 Jul 30 22:09 id_rsa.pub
[hadoop@master .ssh]$ chmod 600 authorized_keys
[hadoop@master .ssh]$ ls -al
total 20
drwx------. 2 hadoop hadoop 4096 Jul 30 22:20 .
drwx------. 3 hadoop hadoop 4096 Jul 30 22:09 ..
-rw-------. 1 hadoop hadoop  402 Jul 30 22:20 authorized_keys
-rw-------. 1 hadoop hadoop 1671 Jul 30 22:09 id_rsa
-rw-r--r--. 1 hadoop hadoop  402 Jul 30 22:09 id_rsa.pub

    • Edit /etc/ssh/sshd_config, confirm the following three options, then restart the service with sudo service sshd restart

[hadoop@master .ssh]$ sudo cat /etc/ssh/sshd_config
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile      .ssh/authorized_keys

    • On every slave, create the .ssh folder and fix its permissions

[hadoop@slave01 ~]$ mkdir .ssh
[hadoop@slave01 ~]$ ls -al
total 32
drwx------. 3 hadoop hadoop 4096 Jul 30 22:47 .
drwxr-xr-x. 4 root   root   4096 Jul 30 20:10 ..
-rw-------. 1 hadoop hadoop  401 Jul 30 20:58 .bash_history
-rw-r--r--. 1 hadoop hadoop   18 Jul 30 20:10 .bash_logout
-rw-r--r--. 1 hadoop hadoop  176 Jul 30 20:10 .bash_profile
-rw-r--r--. 1 hadoop hadoop  124 Jul 30 20:10 .bashrc
drwxrwxr-x. 2 hadoop hadoop 4096 Jul 30 22:47 .ssh
-rw-------. 1 hadoop hadoop  557 Jul 30 20:28 .viminfo

[hadoop@slave01 ~]$ chmod 700 .ssh
[hadoop@slave01 ~]$ ls -al
total 32
drwx------. 3 hadoop hadoop 4096 Jul 30 22:47 .
drwxr-xr-x. 4 root   root   4096 Jul 30 20:10 ..
-rw-------. 1 hadoop hadoop  401 Jul 30 20:58 .bash_history
-rw-r--r--. 1 hadoop hadoop   18 Jul 30 20:10 .bash_logout
-rw-r--r--. 1 hadoop hadoop  176 Jul 30 20:10 .bash_profile
-rw-r--r--. 1 hadoop hadoop  124 Jul 30 20:10 .bashrc
drwx------. 2 hadoop hadoop 4096 Jul 30 22:47 .ssh
-rw-------. 1 hadoop hadoop  557 Jul 30 20:28 .viminfo
[hadoop@slave01 ~]$

    • Send id_rsa.pub to slaves.

[hadoop@master .ssh]$ scp id_rsa.pub hadoop@Slave01.hadoop:/home/hadoop/.ssh/id_rsa_frommaster.pub
The authenticity of host 'Slave01.hadoop (10.15.5.201)' can't be established.
RSA key fingerprint is 76:14:2f:f9:d9:03:07:17:7c:d1:ad:1e:af:55:45:00.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave01.hadoop,10.15.5.201' (RSA) to the list of known hosts.
hadoop@slave01.hadoop's password:
id_rsa.pub                                    100%  402     0.4KB/s   00:00
[hadoop@master .ssh]$ scp id_rsa.pub hadoop@Slave02.hadoop:/home/hadoop/.ssh/id_rsa_frommaster.pub
The authenticity of host 'Slave02.hadoop (10.15.5.202)' can't be established.
RSA key fingerprint is 76:14:2f:f9:d9:03:07:17:7c:d1:ad:1e:af:55:45:00.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave02.hadoop,10.15.5.202' (RSA) to the list of known hosts.
hadoop@slave02.hadoop's password:
id_rsa.pub                                    100%  402     0.4KB/s   00:00

    • On each slave, append this key to authorized_keys and fix the permissions

[hadoop@slave01 .ssh]$ cat id_rsa_frommaster.pub >> authorized_keys
[hadoop@slave01 .ssh]$ chmod 600 authorized_keys

    • Edit /etc/ssh/sshd_config on each slave the same way and restart the sshd service

[hadoop@slave01 ssh]$ sudo cat /etc/ssh/sshd_config
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile      .ssh/authorized_keys

[hadoop@slave01 .ssh]$ sudo service sshd restart
Stopping sshd:                                             [  OK  ]
Starting sshd:                                             [  OK  ]



Final verification:

[hadoop@master .ssh]$ ssh Slave01.hadoop
Last login: Wed Jul 30 23:00:25 from Master.hadoop
[hadoop@slave01 ~]$ logout
Connection to Slave01.hadoop closed.

[hadoop@master .ssh]$ ssh Slave02.hadoop
[hadoop@slave02 ~]$ logout
Connection to Slave02.hadoop closed.
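
To check every slave in one pass, a small shell loop works (a convenience sketch, not from the original notes); each slave's hostname should print without any password prompt:

[hadoop@master ~]$ for h in Slave01.hadoop Slave02.hadoop; do ssh $h hostname; done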

The same setup is needed in the other direction, from the slaves to master:

    1. Generate a key pair on each slave.

    2. Append each slave's public key to authorized_keys in master's .ssh folder.

[hadoop@slave01 .ssh]$ cat id_rsa.pub

ssh-rsa aaaab3nzac1yc2eaaaabiwaaaqeaqkrurgiyd3w36wprpl9ifjjgnve+1r4x4mrhimkxofmw+d3dcbi9fwe2j0/h+namdlwjqyw685itdfhni0x5la7yxy6eie0fqb/nxdkvslc44ruzjqjkqkosqsy/hgvfeff7ozifcecqvqdqn+opwlrbxntmo1uuhg2tfvj3msbgwwhf7fc+usn7y7bmzljpkhavejljjytahekj8wmnzgt160vgr0mizawdlrxkrls2htwqnndf74zjdceqkgja6rukuqblop5x/0lvhbbukn3madnjulizd5pjw/afevysypfkgztspj3+m8gnfqkqyjib6sbffguyt1ipl/gnrw== hadoop@Slave01.hadoop

[hadoop@slave02 ~]$ cat .ssh/id_rsa.pub

ssh-rsa aaaab3nzac1yc2eaaaabiwaaaqea3mhgmqogjikc/elc/4copgvkcq7adtqv87dhqpwnddlljepvy/gbcwyqp4h8cmtmsmnretva8rlbpsip3jgmrbvndccumo+mzga034yr6jweki9zvikzyscctqwck6w5hs3u/pnb1ym6a46ho+dnem42qiayzrhrn9fc1f9hd3/dxrq0kzlo/5xmkuhft1gky+gs+l7mik6y7ptisx+ox/mdjdqfzxfpguro68xx54+dd0gonsb/mavvmdqiwk3fh88oun23ski/cnzd1vlsm55kpt6zro792qcfmme7ciyc6dwdkkzbgkdheaywy5ppfszrgfdd/ervcwrmofdhiw== hadoop@Slave02.hadoop

The key can be appended with cat >>, or pasted in directly with vi; either way, once master knows the slave's key, the login goes through.
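
One way to append a slave's key to master's authorized_keys without copying a file first (a sketch; it assumes the slave can still reach master with password login at this point):

[hadoop@slave01 ~]$ cat ~/.ssh/id_rsa.pub | ssh hadoop@Master.hadoop 'cat >> ~/.ssh/authorized_keys'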

