Big Data Note 13: Hadoop Installation and Configuration


1. Preparing the Linux environment
Right-click the VMware shortcut, open the file location, and double-click vmnetcfg.exe. Select VMnet1 (host-only), change the subnet IP to the network segment 192.168.8.0 with subnet mask 255.255.255.0, then click Apply and OK.
Back in Windows: open Network and Sharing Center -> Change adapter settings -> right-click VMnet1 -> Properties -> double-click IPv4, set the Windows IP to 192.168.8.100 with subnet mask 255.255.255.0, and click OK.
In the virtualization software: select the virtual machine -> right-click -> Settings -> set the network adapter to host-only (VMnet1).
1.1 Modifying the hostname
vim /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=itcast01    ### the line to change
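
The change in /etc/sysconfig/network only takes effect after a reboot. If you want the new hostname immediately (assuming a CentOS 6-style system, which the rest of this note also assumes), you can set it for the current session as well:

# set the hostname for the current session (the reboot in step 1.5 makes it permanent)
hostname itcast01
# confirm
hostname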

1.2 Modifying the IP address
There are two ways:
The first: through the Linux GUI (highly recommended)
In the Linux GUI, right-click the two small computer icons in the upper right corner and click Edit Connections. Select the current connection (eth0), click the Edit button, select IPv4, set Method to Manual, click Add, enter IP 192.168.8.118, subnet mask 255.255.255.0, gateway 192.168.8.1, and apply.

The second: edit the configuration file directly (the hardcore programmer way)
vim /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE="eth0"
BOOTPROTO="static"    ###
HWADDR="00:0c:29:3c:bf:e7"
IPV6INIT="yes"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
UUID="ce22eeca-ecde-4536-8cc2-ef0dc36d4a8c"
IPADDR="192.168.8.118"    ###
NETMASK="255.255.255.0"    ###
GATEWAY="192.168.8.1"    ###
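
The static IP settings are not picked up until the network service is restarted. A quick way to apply and check them (again assuming a CentOS 6-style system using the classic network service):

# apply the new static IP settings
service network restart
# check the interface
ifconfig eth0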

1.3 Modifying the mapping between hostnames and IPs
vim /etc/hosts

192.168.8.118    itcast01
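
A quick check that the mapping works (assuming the entry above was saved):

# the hostname should now resolve to 192.168.8.118
ping -c 3 itcast01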

1.4 Shutting down the firewall
# check the firewall status
service iptables status
# stop the firewall
service iptables stop
# check whether the firewall starts on boot
chkconfig iptables --list
# disable the firewall on boot
chkconfig iptables off

1.5 Restarting Linux
reboot

2. Installing the JDK
2.1 Uploading the JDK
Upload the JDK archive (jdk-7u55-linux-i586.tar.gz) to the Linux server, e.g. with scp or an SFTP tool.

2.2 Unpacking the JDK
# create the target directory
mkdir /usr/java
# unpack
tar -zxvf jdk-7u55-linux-i586.tar.gz -C /usr/java/
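
It is worth confirming the exact directory name the tarball produced, since JAVA_HOME in the next step has to match it:

# for jdk-7u55 this should show jdk1.7.0_55
ls /usr/java/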

2.3 Adding Java to the environment variables
vim /etc/profile
# add at the end of the file
export JAVA_HOME=/usr/java/jdk1.7.0_55
export PATH=$PATH:$JAVA_HOME/bin

# reload the configuration
source /etc/profile
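
A quick sanity check that the JDK is now picked up from the new PATH:

# both should point at the freshly installed JDK
echo $JAVA_HOME
java -version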

3. Installing Hadoop 2.4.1
Note: the Hadoop 2.x configuration files live in $HADOOP_HOME/etc/hadoop. (The steps below assume the Hadoop 2.4.1 archive has been uploaded and unpacked to /itcast/hadoop-2.4.1, the path used in the configuration.)
Pseudo-distributed mode requires modifying 5 configuration files.
3.1 Configuring Hadoop
The first one: hadoop-env.sh
vim hadoop-env.sh
# line 27
export JAVA_HOME=/usr/java/jdk1.7.0_55

The second one: core-site.xml
(This and the following property snippets go inside the <configuration> ... </configuration> element that already exists in each file.)
<!-- the address of the NameNode (the HDFS master) -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://itcast01:9000</value>
</property>
<!-- the directory where Hadoop stores the files it generates at runtime -->
<property>
<name>hadoop.tmp.dir</name>
<value>/itcast/hadoop-2.4.1/tmp</value>
</property>

The third one: hdfs-site.xml
<!-- the number of HDFS replicas -->
<property>
<name>dfs.replication</name>
<value>1</value>
</property>

The fourth one: mapred-site.xml (created by renaming mapred-site.xml.template)
mv mapred-site.xml.template mapred-site.xml
vim mapred-site.xml
<!-- specify that MapReduce runs on YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>

The fifth one: yarn-site.xml
<!-- the address of the ResourceManager (the YARN master) -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>itcast01</value>
</property>
<!-- how reducers fetch data -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>

3.2 Adding Hadoop to the environment variables

vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_55
export HADOOP_HOME=/itcast/hadoop-2.4.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

source /etc/profile
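
A quick check that the Hadoop binaries are now on the PATH:

# should print the Hadoop 2.4.1 version banner
hadoop version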

3.3 Formatting the NameNode (initializing the NameNode)
hdfs namenode -format    (or the older equivalent: hadoop namenode -format)
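
If the format succeeded, the directory configured as hadoop.tmp.dir should now contain the NameNode metadata (a rough check, assuming the default name directory under hadoop.tmp.dir):

# the NameNode image files live under dfs/name/current
ls /itcast/hadoop-2.4.1/tmp/dfs/name/current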

3.4 Starting Hadoop
Start HDFS first:
sbin/start-dfs.sh

Then start YARN:
sbin/start-yarn.sh

3.5 Verifying that the startup succeeded
Use the jps command to check; you should see processes like these:
27408 NameNode
28218 Jps
27643 SecondaryNameNode
28066 NodeManager
27803 ResourceManager
27512 DataNode

http://192.168.8.118:50070 (HDFS management interface)
http://192.168.8.118:8088 (MapReduce/YARN management interface)
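
As a final smoke test of HDFS (a minimal example, assuming the daemons above are running), upload a file and list the root directory:

# copy a local file into HDFS and list it
hdfs dfs -put /etc/profile /profile
hdfs dfs -ls /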

4. Configuring passwordless SSH login
# generate an SSH key pair for passwordless login
# go to the .ssh directory under the home directory
cd ~/.ssh

ssh-keygen -t rsa    (press Enter four times)
After executing this command, two files are generated: id_rsa (the private key) and id_rsa.pub (the public key).
Copy the public key to the machine you want to log in to without a password:
ssh-copy-id localhost
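
To confirm the passwordless login works (assuming the key was copied as above), an SSH command to the same machine should no longer prompt for a password:

# should print the remote date without asking for a password
ssh localhost date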

