Hadoop Server Infrastructure Setup


This article uses the basic environment configuration of the K-Master server as an example to demonstrate user configuration, sudo permission configuration, network configuration, firewall shutdown, JDK installation, and so on. Users should follow the same steps to complete the basic environment configuration of the KVMSLAVE1~KVMSLAVE3 servers.


1. Installation Environment

Hardware environment: four CentOS 6.5 servers (one master node, three slave nodes)

Software environment: Java 1.7.0_45, Hadoop-1.2.1

2. User Configuration

1) Add a user

[root@K-Master hadoop]# adduser hadoop                      # create the hadoop user
[root@K-Master hadoop]# passwd hadoop                       # set a password for the hadoop user

2) Create a working group

[root@K-Master hadoop]# groupadd hadoop                     # create the hadoop working group

3) Add the working group to the existing user

[root@K-Master hadoop]# usermod -G hadoop hadoop
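
To confirm that the user, the group, and the membership were set up correctly, the standard id command offers a quick check (the exact UID/GID values will vary per system):

[root@K-Master hadoop]# id hadoop                           # show the user's UID, GID, and group memberships
uid=500(hadoop) gid=500(hadoop) groups=500(hadoop)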

3. sudo Permission Configuration

1) Create a new user group admin

[root@K-Master hadoop]# groupadd admin

2) Add the existing user to the admin user group

[root@K-Master hadoop]# usermod -G admin,hadoop hadoop

3) Give the /etc/sudoers file write permission

[root@K-Master hadoop]# chmod u+w /etc/sudoers

4) Edit the /etc/sudoers file

[root@K-Master hadoop]# vi /etc/sudoers

By default there is only one entry:

root    ALL=(ALL) ALL

Add another entry below it:

%admin    ALL=(ALL) ALL

This gives the admin user group sudo permissions, so the hadoop user, which belongs to the admin group, also has sudo permissions.

5) Remove the write permission when editing is complete

[root@K-Master hadoop]# chmod u-w /etc/sudoers
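
To verify that the hadoop user actually gained sudo rights, sudo -l (run as the hadoop user) lists the commands that user may run; the expected output ends with an (ALL) ALL entry:

[hadoop@K-Master hadoop]$ sudo -l
...
User hadoop may run the following commands on this host:
    (ALL) ALL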

4. Network Configuration

1) Configure the IP address

The detailed configuration information is as follows:

[root@K-Master hadoop]# su hadoop                           # switch to the hadoop user
[hadoop@K-Master hadoop]$ sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0
HWADDR=06:8D:30:00:00:27
TYPE=Ethernet
BOOTPROTO=static
IPADDR=192.168.100.147
PREFIX=24
GATEWAY=192.168.100.1
DNS1=192.168.100.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME=eth0
UUID=660a57a1-5edf-4cdd-b456-e7e1059aef11
ONBOOT=yes
LAST_CONNECT=1411901185

2) Restart the network service for the settings to take effect

[hadoop@K-Master hadoop]$ sudo service network restart
Shutting down interface eth0:  Device state: 3 (disconnected)
                                                    [  OK  ]
Shutting down loopback interface:                   [  OK  ]
Bringing up loopback interface:                     [  OK  ]
Bringing up interface eth0:  Active connection state: activated
Active connection path: /org/freedesktop/NetworkManager/ActiveConnection/1
                                                    [  OK  ]

3) Test the IP network configuration

View the IP address with the ifconfig command. The output below shows that the eth0 network interface has the IP address 192.168.100.147, which matches the address configured above, indicating that the IP configuration succeeded.

[hadoop@K-Master ~]$ ifconfig
eth0      Link encap:Ethernet  HWaddr 06:8D:30:00:00:27
          inet addr:192.168.100.147  Bcast:192.168.100.255  Mask:255.255.255.0
          inet6 addr: fe80::48d:30ff:fe00:27/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:59099169 errors:0 dropped:0 overruns:0 frame:0
          TX packets:30049168 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:12477388443 (11.6 GiB)  TX bytes:8811418526 (8.2 GiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:2266013 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2266013 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:666482169 (635.6 MiB)  TX bytes:666482169 (635.6 MiB)
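
As a further connectivity check, the gateway can be pinged (a minimal check; 192.168.100.1 is the GATEWAY value from the ifcfg-eth0 file above):

[hadoop@K-Master ~]$ ping -c 3 192.168.100.1                # send three ICMP echo requests to the gateway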

4) Modify the hostname

[hadoop@K-Master hadoop]$ sudo vi /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=K-Master
[hadoop@K-Master hadoop]$ sudo vi /etc/hosts
127.0.0.1               localhost.localdomain
::1                     hdirect30 hdirect30
192.168.100.201         K-Master

5) Reboot the host for the hostname to take effect

[hadoop@K-Master hadoop]$ sudo reboot
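
After the reboot, the hostname command can confirm that the new name took effect:

[hadoop@K-Master ~]$ hostname
K-Master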

5. Turn Off the Firewall

Shut down the firewall on all machines in the cluster before starting Hadoop; otherwise the DataNode processes will shut down automatically.

1) View firewall status

[hadoop@K-Master ~]$ sudo service iptables status
iptables: Firewall is not running.

2) Turn off the firewall

[hadoop@K-Master hadoop]$ sudo service iptables stop
iptables: Setting chains to policy ACCEPT: filter   [  OK  ]
iptables: Flushing firewall rules:                  [  OK  ]
iptables: Unloading modules:                        [  OK  ]

3) Permanently shut down the firewall

[hadoop@K-Master hadoop]$ sudo chkconfig iptables off
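
To verify that the service is now disabled at boot, chkconfig can list its runlevel settings; the expected result is off in every runlevel:

[hadoop@K-Master hadoop]$ chkconfig --list iptables
iptables        0:off   1:off   2:off   3:off   4:off   5:off   6:off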

4) Turn off SELinux

[hadoop@K-Master hadoop]$ sudo vi /etc/selinux/config
SELINUX=disabled
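
Editing /etc/selinux/config only takes effect after a reboot. To stop SELinux from enforcing immediately in the current session, the standard setenforce command can optionally be used:

[hadoop@K-Master hadoop]$ sudo setenforce 0                 # permissive mode until the next reboot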

6. Installing the JDK Tool

1) Copy and install the JDK RPM package

[hadoop@K-Master ~]$ scp hadoop@<source-host>:/home/hadoop/jdk-7u65-linux-x64.rpm .    # copy the JDK package from the machine that holds it
[hadoop@K-Master ~]$ sudo rpm -ivh jdk-7u65-linux-x64.rpm
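
Once the RPM is installed, the JDK can be sanity-checked with java -version (the version line shown is assumed to match the 1.7.0_65 package installed above; output abbreviated):

[hadoop@K-Master ~]$ java -version
java version "1.7.0_65"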

2) Edit the /etc/profile file and append the JAVA_HOME, CLASSPATH, PATH, and HADOOP_HOME entries at the end.

[hadoop@K-Master ~]$ sudo vim /etc/profile
#JAVA
export JAVA_HOME=/usr/java/jdk1.7.0_65
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
#HADOOP
export HADOOP_HOME=/usr/hadoop-1.2.1
export PATH=$PATH:$HADOOP_HOME/bin
export HADOOP_HOME_WARN_SUPPRESS=1

3) Make the configuration take effect

[hadoop@K-Master ~]$ source /etc/profile
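
A quick way to confirm the new variables are active is to echo them (the values come from the /etc/profile entries above):

[hadoop@K-Master ~]$ echo $JAVA_HOME
/usr/java/jdk1.7.0_65
[hadoop@K-Master ~]$ echo $HADOOP_HOME
/usr/hadoop-1.2.1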

For more details, please read on to the next page: http://www.linuxidc.com/Linux/2015-03/114669p2.htm


Original link: http://www.linuxidc.com/Linux/2015-03/114669.htm
