HBase + Hadoop installation and deployment


Several RedHat Linux virtual machines were set up under VMware. The steps below, compiled from a number of online materials, walk through the installation in order.


1. Create a user

groupadd bigdata
useradd -g bigdata hadoop
passwd hadoop

2. Set up the JDK and environment variables

vi /etc/profile

Add:

export JAVA_HOME=/usr/lib/java-1.7.0_07
export CLASSPATH=.
export HADOOP_HOME=/home/hadoop
export HBASE_HOME=/home/hadoop/hbase
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export YARN_HOME=${HADOOP_HOME}
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HDFS_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HBASE_CONF_DIR=${HBASE_HOME}/conf
export ZK_HOME=/home/hadoop/zookeeper
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$HADOOP_HOME/sbin:$ZK_HOME/bin:$PATH


source /etc/profile
chmod -R 777 /usr/lib/java-1.7.0_07
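A quick sanity check that the new profile is in effect (assuming the JDK path above is correct):

java -version        # should report the 1.7.0_07 JDK
echo $HADOOP_HOME    # should print /home/hadoop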


3. Modify hosts

vi /etc/hosts

Add:

172.16.254.215  master
172.16.254.216  slave1
172.16.254.217  slave2
172.16.254.218  slave3
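A quick check from the master that the names resolve and the hosts are reachable:

ping -c 1 slave1
ping -c 1 slave2
ping -c 1 slave3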


4. Passwordless SSH

On the 215 server (master):

su - root
vi /etc/ssh/sshd_config

Make sure that the following lines are present:

RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys

Restart sshd:

service sshd restart


su - hadoop
ssh-keygen -t rsa
cd /home/hadoop/.ssh
cat id_rsa.pub > authorized_keys
chmod 600 authorized_keys


On 216, 217, and 218 respectively:

mkdir /home/hadoop/.ssh
chmod 700 /home/hadoop/.ssh


Then, on the master, run:

scp id_rsa.pub hadoop@slave1:/home/hadoop/.ssh/
scp id_rsa.pub hadoop@slave2:/home/hadoop/.ssh/
scp id_rsa.pub hadoop@slave3:/home/hadoop/.ssh/


On 216, 217, and 218 respectively:

cat /home/hadoop/.ssh/id_rsa.pub >> /home/hadoop/.ssh/authorized_keys
chmod 600 /home/hadoop/.ssh/authorized_keys
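A login from the master now confirms that passwordless SSH works (the first connection may ask to accept each host key):

ssh hadoop@slave1 hostname
ssh hadoop@slave2 hostname
ssh hadoop@slave3 hostname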


5. Install Hadoop, HBase, and ZooKeeper

su - hadoop
mkdir /home/hadoop
mkdir /home/hadoop/hbase
mkdir /home/hadoop/zookeeper


cp -r /home/hadoop/soft/hadoop-2.0.1-alpha/* /home/hadoop/
cp -r /home/hadoop/soft/hbase-0.95.0-hadoop2/* /home/hadoop/hbase/
cp -r /home/hadoop/soft/zookeeper-3.4.5/* /home/hadoop/zookeeper/


1) Hadoop configuration


vi /home/hadoop/etc/hadoop/hadoop-env.sh

Modify:

export JAVA_HOME=/usr/lib/java-1.7.0_07
export HBASE_MANAGES_ZK=true


vi /home/hadoop/etc/hadoop/core-site.xml

Add (inside the <configuration> element):

<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://172.16.254.215:9000</value>
</property>
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>172.16.254.215</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
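For reference, each of the Hadoop XML files edited here (core-site.xml, hdfs-site.xml, yarn-site.xml) wraps its properties in the standard configuration skeleton:

<?xml version="1.0"?>
<configuration>
  <!-- <property> elements go here -->
</configuration>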


vi /home/hadoop/etc/hadoop/slaves

Add (do not list the master as a slave):

slave1
slave2
slave3


vi /home/hadoop/etc/hadoop/hdfs-site.xml

Add (inside <configuration>):

<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/hadoop/hdfs/name</value>
  <final>true</final>
</property>
<property>
  <name>dfs.federation.nameservice.id</name>
  <value>ns1</value>
</property>
<property>
  <name>dfs.namenode.backup.address.ns1</name>
  <value>172.16.254.215:50100</value>
</property>
<property>
  <name>dfs.namenode.backup.http-address.ns1</name>
  <value>172.16.254.215:50105</value>
</property>
<property>
  <name>dfs.federation.nameservices</name>
  <value>ns1</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1</name>
  <value>172.16.254.215:9000</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns2</name>
  <value>172.16.254.215:9000</value>
</property>
<property>
  <name>dfs.namenode.http-address.ns1</name>
  <value>172.16.254.215:23001</value>
</property>
<property>
  <name>dfs.namenode.http-address.ns2</name>
  <value>172.16.254.215:13001</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/hadoop/hdfs/data</value>
  <final>true</final>
</property>
<property>
  <name>dfs.namenode.secondary.http-address.ns1</name>
  <value>172.16.254.215:23002</value>
</property>
<property>
  <name>dfs.namenode.secondary.http-address.ns2</name>
  <value>172.16.254.215:23002</value>
</property>
<property>
  <name>dfs.namenode.secondary.http-address.ns1</name>
  <value>172.16.254.215:23003</value>
</property>
<property>
  <name>dfs.namenode.secondary.http-address.ns2</name>
  <value>172.16.254.215:23003</value>
</property>
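Once the file is saved, Hadoop can echo an effective value back, which is a quick way to catch XML typos (assuming this release's hdfs getconf supports the -confKey option):

hdfs getconf -confKey dfs.replication    # expect: 3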


vi /home/hadoop/etc/hadoop/yarn-site.xml

Add (inside <configuration>):

<property>
  <name>yarn.resourcemanager.address</name>
  <value>172.16.254.215:18040</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>172.16.254.215:18030</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>172.16.254.215:18088</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>172.16.254.215:18025</value>
</property>
<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>172.16.254.215:18141</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce.shuffle</value>
</property>


2) HBase configuration


vi /home/hadoop/hbase/conf/hbase-site.xml

Add (inside <configuration>):

<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://172.16.254.215:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.config.read.zookeeper.config</name>
  <value>true</value>
</property>
<property>
  <name>hbase.master</name>
  <value>master</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>slave1,slave2,slave3</value>
</property>
<property>
  <name>zookeeper.session.timeout</name>
  <value>60000</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
<property>
  <name>hbase.tmp.dir</name>
  <value>/home/hadoop/hbase/tmp</value>
  <description>Temporary directory on the local filesystem.</description>
</property>
<property>
  <name>hbase.client.keyvalue.maxsize</name>
  <value>10485760</value>
</property>


vi /home/hadoop/hbase/conf/regionservers

Add:

slave1
slave2
slave3


vi /home/hadoop/hbase/conf/hbase-env.sh

Modify (HBASE_MANAGES_ZK is false because a standalone ZooKeeper ensemble is used here):

export JAVA_HOME=/usr/lib/java-1.7.0_07
export HBASE_MANAGES_ZK=false


3) ZooKeeper configuration


vi /home/hadoop/zookeeper/conf/zoo.cfg

Add:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/hadoop/zookeeper/data
clientPort=2181
server.1=slave1:2888:3888
server.2=slave2:2888:3888
server.3=slave3:2888:3888
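The dataDir has to exist before the myid files are written in step 4 below; if it was not created already, on each ZooKeeper node:

mkdir -p /home/hadoop/zookeeper/data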


Copy zoo.cfg into the HBase conf directory (hbase.config.read.zookeeper.config=true makes HBase read it from there):

cp /home/hadoop/zookeeper/conf/zoo.cfg /home/hadoop/hbase/conf/


4) Synchronize the master and slaves

scp -r /home/hadoop/* hadoop@slave1:/home/hadoop/
scp -r /home/hadoop/hbase hadoop@slave1:/home/hadoop/
scp -r /home/hadoop/zookeeper hadoop@slave1:/home/hadoop/

scp -r /home/hadoop/* hadoop@slave2:/home/hadoop/
scp -r /home/hadoop/hbase hadoop@slave2:/home/hadoop/
scp -r /home/hadoop/zookeeper hadoop@slave2:/home/hadoop/

scp -r /home/hadoop/* hadoop@slave3:/home/hadoop/
scp -r /home/hadoop/hbase hadoop@slave3:/home/hadoop/
scp -r /home/hadoop/zookeeper hadoop@slave3:/home/hadoop/
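Equivalent, and less error-prone to type, is a small loop over the slaves (it also covers the hbase and zookeeper subdirectories, since they live under /home/hadoop):

for h in slave1 slave2 slave3; do
  scp -r /home/hadoop/* hadoop@$h:/home/hadoop/
done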


Set the ZooKeeper myid on slave1, slave2, and slave3 (the value must match that host's server.N line in zoo.cfg):

echo "1" > /home/hadoop/zookeeper/data/myid    # on slave1
echo "2" > /home/hadoop/zookeeper/data/myid    # on slave2
echo "3" > /home/hadoop/zookeeper/data/myid    # on slave3


5) Test

Test Hadoop:

hadoop namenode -format -clusterid clustername

start-all.sh
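If everything started, jps on the master should list roughly NameNode, SecondaryNameNode, and ResourceManager, and on each slave DataNode and NodeManager (exact process names can vary slightly by version):

jps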

hadoop fs -ls hdfs://172.16.254.215:9000/
hadoop fs -mkdir hdfs://172.16.254.215:9000/hbase

# Optional further checks:
# hadoop fs -copyFromLocal ./install.log hdfs://172.16.254.215:9000/testfolder
# hadoop fs -ls hdfs://172.16.254.215:9000/testfolder
# hadoop fs -put /usr/hadoop/hadoop-2.0.1-alpha/*.txt hdfs://172.16.254.215:9000/testfolder
# cd /usr/hadoop/hadoop-2.0.1-alpha/share/hadoop/mapreduce
# hadoop jar hadoop-mapreduce-examples-2.0.1-alpha.jar wordcount hdfs://172.16.254.215:9000/testfolder hdfs://172.16.254.215:9000/output
# hadoop fs -ls hdfs://172.16.254.215:9000/output
# hadoop fs -cat hdfs://172.16.254.215:9000/output/part-r-00000


Start ZooKeeper on slave1, slave2, and slave3:

zkServer.sh start
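Each node can then report its role; across the three slaves, one should report leader and the other two follower:

zkServer.sh status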


Start HBase:

start-hbase.sh

Enter the HBase shell and test HBase:

hbase shell

list
create 'student', 'name', 'address'
put 'student', '1', 'name', 'Tom'
get 'student', '1'
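Two more standard shell commands round out the smoke test:

scan 'student'    # should print the row inserted above
status            # summary of live region servers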



