Excerpt one
On the 192.168.42.142 machine.
1) Run: ssh-keygen -t rsa
2) Press Enter twice (accepting the defaults)
3) Run:
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.42.163
Or, for an ordinary user (substitute the actual user name):
ssh-copy-id username@192.168.42.163
4) Enter the root password of the 163 machine when prompted
At this point you can ssh to the 163 machine without a password, and scp between the two machines also requires no password.
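A quick way to verify, run from the 142 machine (the file used in the scp line is just an arbitrary example):
ssh root@192.168.42.163 hostname # should print the 163 machine's hostname with no password prompt
scp /etc/hosts root@192.168.42.163:/tmp/ # a sample copy; it should complete without a password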
Excerpt two
Configure SSH for passwordless access by the user between MPI nodes. Because MPI parallel programs need to exchange information between nodes, passwordless access must be set up pairwise between all nodes.
(1) Generate the private key id_dsa and public key id_dsa.pub, as follows:
ssh-keygen -t dsa
The system displays some information and prompts; just press Enter at each one.
(2) Use the key for access authorization. Execute the following command on node c1:
cp ~/.ssh/id_dsa.pub ~/.ssh/authorized_keys
Because we are using the root user, ~ stands for /root.
(3) Copy the files in the ~/.ssh directory to all nodes:
scp -r ~/.ssh/* c2:/root/.ssh
(4) Check whether you can log on to the other nodes directly (no password required):
ssh c2
If no password is required to log on between any pair of nodes, the configuration was successful. (A loop for distributing the keys to more nodes is sketched below.)
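If there are more nodes than c1 and c2, step (3) can be repeated in a loop. A minimal sketch, assuming the extra nodes are named c3 and c4 (hypothetical names), that /root/.ssh already exists on each of them, and that root's password still works for the initial copy:
for node in c2 c3 c4; do
    scp -r ~/.ssh/* $node:/root/.ssh # push the keys and authorized_keys to each node
    ssh $node hostname # afterwards this should return the node name without a password prompt
done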
(http://www.cnblogs.com/xia520pi/archive/2012/05/16/2503949.html)
2. SSH passwordless authentication configuration
During Hadoop operation, the remote Hadoop daemons need to be managed: after Hadoop starts, the NameNode starts and stops the various daemons on each DataNode via SSH (Secure Shell). The commands executed between nodes must not require entering a password, so we need to configure SSH to use passwordless public key authentication. The NameNode can then use SSH to log in without a password and start the DataNode processes; by the same principle, a DataNode can also log in to the NameNode over SSH without a password.
2.1 Installing and starting the SSH protocol
When installing CentOS 6.0 in "Hadoop cluster (phase 1)", we chose some basic installation packages, so the two services we need, SSH and rsync, are already installed. This can be verified with the following commands:
rpm -qa | grep openssh
rpm -qa | grep rsync
If SSH and rsync are not installed, you can install them with the commands below:
yum install openssh-server openssh-clients (install the SSH protocol; on CentOS the packages are named openssh-*)
yum install rsync (rsync is a remote data synchronization tool that allows fast synchronization of files between multiple hosts over a LAN/WAN)
service sshd restart (start the service)
Make sure the commands above have been completed on all servers; at this point every machine can log in to the others with a password.
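On CentOS 6 you can also confirm that the sshd service is running, and set it to start at boot, with the standard service tools:
service sshd status # reports whether sshd is running
chkconfig sshd on # optional: start sshd automatically at boot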
2.2 Configure Master passwordless login to all slaves
1) How SSH passwordless login works
The Master (NameNode | JobTracker), acting as the client, implements passwordless public key authentication to connect to the Slave servers (DataNode | TaskTracker). A key pair, consisting of a public key and a private key, is generated on the Master, and the public key is then copied to all slaves. When the Master connects to a slave via SSH, the slave generates a random number, encrypts it with the Master's public key, and sends it to the Master. The Master decrypts it with its private key and passes the decrypted number back to the slave; the slave confirms the number is correct and allows the Master to connect. This is the public key authentication process, during which the user never has to enter a password manually. The crucial step is copying the Master's public key to the slave. (A rough imitation of this exchange is sketched below.)
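To make the exchange concrete, it can be imitated with openssl. This is only a sketch of the principle, not what SSH literally does on the wire, and all of the file names below are made up:
openssl genrsa -out master_key.pem 2048 # Master's private key
openssl rsa -in master_key.pem -pubout -out master_pub.pem # Master's public key, copied to the slave
head -c 32 /dev/urandom > challenge.bin # the slave generates a random number
openssl pkeyutl -encrypt -pubin -inkey master_pub.pem -in challenge.bin -out challenge.enc # the slave encrypts it with Master's public key
openssl pkeyutl -decrypt -inkey master_key.pem -in challenge.enc -out challenge.dec # Master decrypts it with its private key
cmp challenge.bin challenge.dec && echo "identity confirmed" # the slave checks the decrypted number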
2) Create a key pair on the Master machine
Execute the following command on the master node:
ssh-keygen -t rsa -P ''
This command generates a passwordless key pair. When asked for the save path, press Enter to accept the default. The generated key pair, id_rsa and id_rsa.pub, is stored by default in the "/home/hadoop/.ssh" directory.
Check whether the ".ssh" folder exists under "/home/hadoop/", and whether the two newly generated key files are inside it.
Then make the following configuration on the Master node, appending id_rsa.pub to the authorized keys file:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
There are two things to do before verifying. The first is to modify the permissions of the "authorized_keys" file (this permission setting is very important: with insecure settings, the RSA function cannot be used). The second is to edit "/etc/ssh/sshd_config" as the root user, so that passwordless login can take effect.
1) Modify the permissions of the file "authorized_keys"
chmod 600 ~/.ssh/authorized_keys
Note: if you do not set this, you will still be prompted for a password during verification; it took nearly half a day to find the reason. A few good articles found online, such as "Hadoop cluster (phase 5 supplement): JDK and SSH passwordless configuration", can help with further study.
2) Set up the SSH configuration
Log in to the server as the root user and modify the following lines in the SSH configuration file "/etc/ssh/sshd_config":
RSAAuthentication yes # Enable RSA authentication
PubkeyAuthentication yes # Enable public key/private key pair authentication
AuthorizedKeysFile .ssh/authorized_keys # Public key file path (same as the file generated above)
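A quick way to confirm that all three directives are present and not commented out (run as root; lines still starting with # would not match):
grep -E '^(RSAAuthentication|PubkeyAuthentication|AuthorizedKeysFile)' /etc/ssh/sshd_config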
After making these changes, remember to restart the SSH service for the settings to take effect.
service sshd restart
Log out of root and verify success as the ordinary hadoop user:
ssh localhost
This confirms that passwordless login to the local machine is now set up. The next task is to copy the public key to all the slave machines. Use the following command format to copy the public key:
scp ~/.ssh/id_rsa.pub remote_username@remote_server_ip:~/
For example:
scp ~/.ssh/id_rsa.pub hadoop@192.168.1.3:~/
The above command copies the file "id_rsa.pub", as the user "hadoop", to "/home/hadoop/" on the server with IP "192.168.1.3".
The following configures the Slave1.hadoop node, whose IP is "192.168.1.3".
1) Copy the public key from Master.hadoop to Slave1.hadoop
This shows that the file "id_rsa.pub" has been transferred. Because the passwordless connection is not set up yet, connecting still prompts for the hadoop user's password on the Slave1.hadoop server. To make sure the file really arrived, log in to the slave1.hadoop (192.168.1.3) server with SecureCRT and check whether the file exists under "/home/hadoop/".
From the above we can see that the public key has been copied over successfully.
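If SecureCRT is not at hand, the same check can be made from Master.hadoop itself; at this stage you will still be asked for the hadoop user's password:
ssh hadoop@192.168.1.3 "ls -l ~/id_rsa.pub" # the file should be listed under /home/hadoop/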
2) Create the ".ssh" folder under "/home/hadoop/"
This step is not strictly necessary: if the ".ssh" folder already exists under "/home/hadoop" on Slave1.hadoop, there is no need to create it. But since we have not previously configured passwordless login on the slave machines, the folder does not exist, so create it with the following command. (Note: log in to the system as hadoop; unless system files need to be modified, commands are generally executed as the ordinary hadoop user we created.)
mkdir ~/.ssh
Then modify the permissions of the ".ssh" folder, changing them to "700", with the following command:
chmod 700 ~/.ssh
Note: if you skip this, then even though you set the "authorized_keys" permissions as described earlier, configured "/etc/ssh/sshd_config", and restarted the sshd service, "ssh localhost" on Master works without a password while logging in to Slave1.hadoop still asks for one. The reason is the permissions of the ".ssh" folder. When the system generates ".ssh" automatically while configuring passwordless SSH login, its permissions are "700"; if it is created manually, the group and other permission bits are set, and that causes RSA passwordless remote login to fail.
Comparing the two images above, you can see that the ".ssh" folder's permissions have changed.
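Since the screenshots are not reproduced here, the permissions can also be inspected directly in the shell:
ls -ld ~/.ssh # should show drwx------, i.e. 700, owned by hadoop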
3) Append to authorization file "authorized_keys"
So far, Master.hadoop's public key has arrived, the ".ssh" folder exists, and its permissions have been modified. This step appends Master.hadoop's public key to Slave1.hadoop's authorization file "authorized_keys". Use the following commands to append it and set the permissions of "authorized_keys":
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
4) Modify "/etc/ssh/sshd_config" as the root user
For the specific steps, refer to "Set up the SSH configuration" under Master.hadoop above; it consists of two parts: first modify the configuration file, then restart the SSH service.
5) Log in to Slave1.hadoop from Master.hadoop via SSH without a password
Once the previous steps are complete, you can use the following command format to log in over SSH without a password:
ssh remote_server_ip
Pay attention to three things: the first is the passwordless SSH login command itself; the second and third are that, before and after the login, the machine name after the "@" in the prompt changes from "Master" to "Slave1". This shows that we have successfully implemented passwordless SSH login.
Finally, remember to remove the "id_rsa.pub" file from the "/home/hadoop/" directory.
rm -r ~/id_rsa.pub
So far, through the five steps above, we have achieved passwordless SSH login from "Master.hadoop" to "Slave1.hadoop". What follows is to repeat the steps above for the remaining two slave servers (Slave2.hadoop and Slave3.hadoop); a consolidated sketch is given below. With that, "Configure Master passwordless login to all slave servers" is complete.
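A consolidated sketch of steps 1)-5) for the remaining slaves, run on Master.hadoop as the hadoop user. The IP addresses of Slave2.hadoop and Slave3.hadoop are not given in this article, so the ones below are placeholders; the hadoop password is still needed once per slave for the initial commands:
for ip in 192.168.1.4 192.168.1.5; do
    scp ~/.ssh/id_rsa.pub hadoop@$ip:~/ # step 1: copy the public key over
    ssh hadoop@$ip 'mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat ~/id_rsa.pub >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys && rm ~/id_rsa.pub' # steps 2-3: folder, permissions, append
    ssh hadoop@$ip hostname # step 5: should now print the slave's hostname with no password prompt
done
(Step 4, modifying /etc/ssh/sshd_config as root, still has to be done on each slave separately.)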
2.3 Configure all slaves for passwordless login to Master
The principle is the same as for Master's passwordless login to all slaves: append each slave's public key to "authorized_keys" under Master's ".ssh" folder. Remember that it is appended (">>").
To illustrate, we take passwordless login from "Slave1.hadoop" to "Master.hadoop" as the example, which also consolidates what was learned earlier; the remaining "Slave2.hadoop" and "Slave3.hadoop" just follow this example.
First create "Slave1.hadoop"'s own public and private keys, and append its public key to its own "authorized_keys" file, using the following commands:
ssh-keygen -t rsa -P ''
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Next, copy the public key "id_rsa.pub" from "Slave1.hadoop" into the "/home/hadoop/" directory on "Master.hadoop" with the "scp" command, and append it to "authorized_keys" on "Master.hadoop".
1) Operations on the "Slave1.hadoop" server
Use the following command (replace master_server_ip with Master.hadoop's actual IP address, which is not given in this article):
scp ~/.ssh/id_rsa.pub hadoop@master_server_ip:~/
2) Operations on the "Master.hadoop" server
The following commands are used:
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
Then delete the "id_rsa.pub" file that you just copied over.
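For completeness, the deletion command on Master.hadoop would be:
rm ~/id_rsa.pub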
Finally, test passwordless login from "Slave1.hadoop" to "Master.hadoop".
From the results above we can see that this has been implemented successfully; then also try passwordless login from "Master.hadoop" to "Slave1.hadoop".
At this point, "Master.hadoop" and "Slave1.hadoop" can log in to each other without a password. What remains is to follow the steps above to establish passwordless login between the remaining "Slave2.hadoop" and "Slave3.hadoop" and "Master.hadoop". Then Master can log in to each slave without password authentication, and each slave can log in to Master without password authentication.
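As a final check, every direction can be tested from a single machine. A sketch assuming the four hostnames resolve (for example via /etc/hosts); BatchMode=yes makes ssh fail instead of prompting if a password would still be required:
for src in Master.hadoop Slave1.hadoop Slave2.hadoop Slave3.hadoop; do
    for dst in Master.hadoop Slave1.hadoop Slave2.hadoop Slave3.hadoop; do
        [ "$src" = "$dst" ] && continue
        ssh -o BatchMode=yes hadoop@$src "ssh -o BatchMode=yes hadoop@$dst hostname" && echo "$src -> $dst OK"
    done
done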