First, let me say that I don't want to reinvent the wheel. If you want to set up a Hadoop environment, there are already plenty of detailed walkthroughs and command listings online, and I won't repeat them here.
Second, I should admit that I'm a beginner myself and not very familiar with Hadoop. I simply wanted to build a working environment and see Hadoop's true face for myself, and fortunately, I finally did. When I ran the wordcount example, I was genuinely impressed by how well Hadoop handles distribution: even someone with no distributed-systems experience only needs to do some configuration to get a distributed cluster running.
Now, down to business.
Things you should know before setting up a Hadoop environment:
1. Hadoop runs on Linux, so you need to install a Linux operating system.
2. You need a cluster to run Hadoop on, for example several Linux machines that can reach each other on a LAN.
3. For the cluster nodes to access each other, you need passwordless SSH login.
4. Hadoop runs on the JVM, which means you must install a Java JDK and set JAVA_HOME.
5. Hadoop's components are configured through XML. After downloading Hadoop from the official site and extracting it, edit the corresponding configuration files in its etc/hadoop directory.
As the saying goes, to do a good job one must first sharpen one's tools. So here is the software used in this setup:
1. VirtualBox — we need to simulate several Linux machines, and with limited hardware the easiest way is to create a few VMs in VirtualBox.
2. CentOS — download the CentOS 7 ISO image, load it into VirtualBox, and install it.
3. SecureCRT — an SSH client for remote access to the Linux machines.
4. WinSCP — for transferring files between Windows and Linux.
5. JDK for Linux — download it from the Oracle website; extract it and configure it.
6. hadoop-2.7.3 — available from the Apache website.
With that out of the way, the setup is covered below in three parts.
Linux environment preparation
Configure IP addresses
So that the host and the VMs, as well as the VMs among themselves, can communicate, set each CentOS VM's network adapter in VirtualBox to Host-Only mode and assign static IPs manually. Note that the VMs' gateway must match the IP address of the host-only network adapter on the host machine. After configuring the IP, restart the network service for the change to take effect. Three Linux machines are set up this way: 192.168.56.101, 192.168.56.102, and 192.168.56.103.
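As a reference, here is a minimal sketch of such a static configuration on CentOS (the interface name enp0s8 and the gateway value are assumptions based on VirtualBox's host-only defaults; check yours with ip addr):

# /etc/sysconfig/network-scripts/ifcfg-enp0s8  (excerpt; interface name is an assumption)
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.56.101
NETMASK=255.255.255.0
GATEWAY=192.168.56.1   # assumption: the host's host-only adapter IP

# apply the change
systemctl restart network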
Configure hostnames
For 192.168.56.101, set the hostname to hadoop01, and map the cluster's IPs to hostnames in the hosts file. The other two machines are configured the same way.
[root@hadoop01 ~]# cat /etc/sysconfig/network
# Created by anaconda
NETWORKING=yes
HOSTNAME=hadoop01
[root@hadoop01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.56.101 hadoop01
192.168.56.102 hadoop02
192.168.56.103 hadoop03
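By the way, editing /etc/sysconfig/network is the CentOS 6 convention; on CentOS 7 the persistent hostname can also be set with the standard hostnamectl command:

hostnamectl set-hostname hadoop01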
Permanently disable the firewall
service iptables stop only lasts until the next reboot, after which the firewall starts again, so we need commands that disable it permanently. Also, since this is CentOS 7, the firewall is firewalld, and the commands are:
systemctl stop firewalld.service      # stop firewalld
systemctl disable firewalld.service   # keep firewalld from starting at boot
Disable the SELinux protection system
In /etc/sysconfig/selinux, change SELINUX to disabled, then reboot the machine for the change to take effect:
[root@hadoop02 ~]# cat /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
Passwordless SSH login across the cluster
First, generate an SSH key pair on each machine.
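A minimal sketch with standard OpenSSH (press Enter at the prompts to accept the defaults and an empty passphrase, so logins need no password):

ssh-keygen -t rsa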
Then copy the public key to all three machines:
ssh-copy-id 192.168.56.101
ssh-copy-id 192.168.56.102
ssh-copy-id 192.168.56.103
This way, if hadoop01 wants to log in to hadoop02, you can simply type:
ssh hadoop02
Configure the JDK
Create three folders under /home:
tools — for tool packages
softwares — for installed software
data — for data
Upload the downloaded Linux JDK to /home/tools on hadoop01 via WinSCP.
Extract the JDK into softwares:
<pre name="code" class="plain">tar -zxf jdk-7u76-linux-x64.tar.gz -C /home/softwares
The JDK home directory is now /home/softwares/jdk1.8.0_111. Append this path to /etc/profile and set JAVA_HOME there:
export JAVA_HOME=/home/softwares/jdk1.8.0_111
export PATH=$PATH:$JAVA_HOME/bin
Save the changes, then run source /etc/profile to apply them.
Check whether the JDK is installed correctly:
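A quick check using the standard JDK command:

java -version

If /etc/profile took effect, this prints the installed version (here it should report 1.8.0_111).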
You can then copy the configured files from this node to the other nodes:
scp -r /home/* root@192.168.56.10X:/home
Hadoop cluster installation
The cluster plan is as follows:
Node 101 serves as the HDFS NameNode, and all three nodes run DataNodes; node 102 is the YARN ResourceManager, and all three nodes run NodeManagers; node 103 runs the SecondaryNameNode. The JobHistoryServer and the WebAppProxyServer are started on nodes 101 and 102 respectively.
Download hadoop-2.7.3
and place it in the /home/softwares folder. Since Hadoop needs the JDK to run, first set JAVA_HOME in etc/hadoop/hadoop-env.sh under the Hadoop home directory.
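A minimal sketch of that change, using the JDK path configured earlier:

# etc/hadoop/hadoop-env.sh
export JAVA_HOME=/home/softwares/jdk1.8.0_111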
(PS: in hindsight, the JDK version I used feels a bit too new.)
Next, edit the XML configuration of each Hadoop component in turn.
Edit core-site.xml:
specify the NameNode address
set Hadoop's temporary directory
set Hadoop's trash interval, i.e. how long deleted files are kept (in minutes; 10080 is 7 days)
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.56.101:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/softwares/hadoop-2.7.3/data/tmp</value>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <value>10080</value>
    </property>
</configuration>
Edit hdfs-site.xml:
set the replication factor
disable HDFS permission checking
set the NameNode's HTTP (web UI) address
set the SecondaryNameNode's HTTP address
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>192.168.56.101:50070</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>192.168.56.103:50090</value>
    </property>
</configuration>
Rename mapred-site.xml.template to mapred-site.xml.
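For example, from the Hadoop home directory:

mv etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml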
In it:
specify yarn as the MapReduce framework, so jobs are scheduled by YARN
specify the JobHistory address
specify the JobHistory web port
enable uber mode, an optimization for small MapReduce jobs
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>192.168.56.101:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>192.168.56.101:19888</value>
    </property>
    <property>
        <name>mapreduce.job.ubertask.enable</name>
        <value>true</value>
    </property>
</configuration>
Edit yarn-site.xml:
set the NodeManager auxiliary service to mapreduce_shuffle
designate node 102 as the ResourceManager
set the web proxy address on node 102
enable YARN log aggregation
set how long aggregated YARN logs are kept (in seconds; 604800 is 7 days)
set the NodeManager memory: 8 GB
set the NodeManager CPU: 8 cores
<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>192.168.56.102</value>
    </property>
    <property>
        <name>yarn.web-proxy.address</name>
        <value>192.168.56.102:8888</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>8192</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>8</value>
    </property>
</configuration>
Configure slaves
This file lists the compute nodes, i.e. the nodes that run a DataNode and a NodeManager:
192.168.56.101
192.168.56.102
192.168.56.103
First format HDFS on the NameNode, i.e. run on node 101:
Enter the Hadoop home directory: cd /home/softwares/hadoop-2.7.3
Run the hadoop script in the bin directory: bin/hadoop namenode -format
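(Note: in Hadoop 2.x this command still works but prints a deprecation warning; the current equivalent is:

bin/hdfs namenode -format
)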
Formatting only succeeded if the output says the filesystem "has been successfully formatted". (PS: the screenshot here was borrowed from someone else, hope you don't mind.)
Once all of the above is configured, copy it to the other machines.
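For example, a sketch that pushes the finished configuration directory to node 102, assuming the same directory layout on every node (repeat for 103):

scp -r /home/softwares/hadoop-2.7.3/etc/hadoop root@192.168.56.102:/home/softwares/hadoop-2.7.3/etc/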
Hadoop environment test
Run the corresponding scripts from the Hadoop home directory.
The jps command (Java Virtual Machine Process Status tool) shows the running Java processes.
Start HDFS on the NameNode machine, node 101:
[root@hadoop01 hadoop-2.7.3]# sbin/start-dfs.sh
Java HotSpot(TM) Client VM warning: You have loaded library /home/softwares/hadoop-2.7.3/lib/native/libhadoop.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
16/11/07 16:49:19 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop01]
hadoop01: starting namenode, logging to /home/softwares/hadoop-2.7.3/logs/hadoop-root-namenode-hadoop01.out
192.168.56.102: starting datanode, logging to /home/softwares/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop02.out
192.168.56.103: starting datanode, logging to /home/softwares/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop03.out
192.168.56.101: starting datanode, logging to /home/softwares/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop01.out
Starting secondary namenodes [hadoop03]
hadoop03: starting secondarynamenode, logging to /home/softwares/hadoop-2.7.3/logs/hadoop-root-secondarynamenode-hadoop03.out
Running jps on node 101 now shows that the NameNode and a DataNode have started:
[root@hadoop01 hadoop-2.7.3]# jps
7826 Jps
7270 DataNode
7052 NameNode
Running jps on nodes 102 and 103 shows that their DataNodes have started (plus the SecondaryNameNode on 103):
[root@hadoop02 bin]# jps
4260 DataNode
4488 Jps
[root@hadoop03 ~]# jps
6436 SecondaryNameNode
6750 Jps
6191 DataNode
Start YARN
Run on node 102:
[root@hadoop02 hadoop-2.7.3]# sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/softwares/hadoop-2.7.3/logs/yarn-root-resourcemanager-hadoop02.out
192.168.56.101: starting nodemanager, logging to /home/softwares/hadoop-2.7.3/logs/yarn-root-nodemanager-hadoop01.out
192.168.56.103: starting nodemanager, logging to /home/softwares/hadoop-2.7.3/logs/yarn-root-nodemanager-hadoop03.out
192.168.56.102: starting nodemanager, logging to /home/softwares/hadoop-2.7.3/logs/yarn-root-nodemanager-hadoop02.out
Check each node with jps:
[root@hadoop02 hadoop-2.7.3]# jps
4641 ResourceManager
4260 DataNode
4765 NodeManager
5165 Jps
[root@hadoop01 hadoop-2.7.3]# jps
7270 DataNode
8375 Jps
7976 NodeManager
7052 NameNode
[root@hadoop03 ~]# jps
6915 NodeManager
6436 SecondaryNameNode
7287 Jps
6191 DataNode
Start the JobHistoryServer and the web proxy daemon on their respective nodes (101 and 102):
[root@hadoop01 hadoop-2.7.3]# sbin/mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /home/softwares/hadoop-2.7.3/logs/mapred-root-historyserver-hadoop01.out
[root@hadoop01 hadoop-2.7.3]# jps
8624 Jps
7270 DataNode
7976 NodeManager
8553 JobHistoryServer
7052 NameNode
[root@hadoop02 hadoop-2.7.3]# sbin/yarn-daemon.sh start proxyserver
starting proxyserver, logging to /home/softwares/hadoop-2.7.3/logs/yarn-root-proxyserver-hadoop02.out
[root@hadoop02 hadoop-2.7.3]# jps
4641 ResourceManager
4260 DataNode
5367 WebAppProxyServer
5402 Jps
4765 NodeManager
On node 101 (hadoop01), check the cluster status in a browser through the NameNode web UI at http://192.168.56.101:50070, the address configured in hdfs-site.xml.
Upload a file to HDFS:
[root@hadoop01 hadoop-2.7.3]# bin/hdfs dfs -put /etc/profile /profile
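To confirm the upload, you can list the HDFS root with the standard shell command:

bin/hdfs dfs -ls /

The uploaded /profile should appear in the listing.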
Run the wordcount example:
[root@hadoop01 hadoop-2.7.3]# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /profile /fll_out
Java HotSpot(TM) Client VM warning: You have loaded library /home/softwares/hadoop-2.7.3/lib/native/libhadoop.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
16/11/07 17:17:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/11/07 17:17:12 INFO client.RMProxy: Connecting to ResourceManager at /192.168.56.102:8032
16/11/07 17:17:18 INFO input.FileInputFormat: Total input paths to process : 1
16/11/07 17:17:19 INFO mapreduce.JobSubmitter: number of splits:1
16/11/07 17:17:19 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1478509135878_0001
16/11/07 17:17:20 INFO impl.YarnClientImpl: Submitted application application_1478509135878_0001
16/11/07 17:17:20 INFO mapreduce.Job: The url to track the job: http://192.168.56.102:8888/proxy/application_1478509135878_0001/
16/11/07 17:17:20 INFO mapreduce.Job: Running job: job_1478509135878_0001
16/11/07 17:18:34 INFO mapreduce.Job: Job job_1478509135878_0001 running in uber mode : true
16/11/07 17:18:35 INFO mapreduce.Job:  map 0% reduce 0%
16/11/07 17:18:43 INFO mapreduce.Job:  map 100% reduce 0%
16/11/07 17:18:50 INFO mapreduce.Job:  map 100% reduce 100%
16/11/07 17:18:55 INFO mapreduce.Job: Job job_1478509135878_0001 completed successfully
16/11/07 17:18:59 INFO mapreduce.Job: Counters: 52
    File System Counters
        FILE: Number of bytes read=4264
        FILE: Number of bytes written=6412
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=3940
        HDFS: Number of bytes written=261673
        HDFS: Number of read operations=35
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=8
    Job Counters
        Launched map tasks=1
        Launched reduce tasks=1
        Other local map tasks=1
        Total time spent by all maps in occupied slots (ms)=8246
        Total time spent by all reduces in occupied slots (ms)=7538
        TOTAL_LAUNCHED_UBERTASKS=2
        NUM_UBER_SUBMAPS=1
        NUM_UBER_SUBREDUCES=1
        Total time spent by all map tasks (ms)=8246
        Total time spent by all reduce tasks (ms)=7538
        Total vcore-milliseconds taken by all map tasks=8246
        Total vcore-milliseconds taken by all reduce tasks=7538
        Total megabyte-milliseconds taken by all map tasks=8443904
        Total megabyte-milliseconds taken by all reduce tasks=7718912
    Map-Reduce Framework
        Map input records=78
        Map output records=256
        Map output bytes=2605
        Map output materialized bytes=2116
        Input split bytes=99
        Combine input records=256
        Combine output records=156
        Reduce input groups=156
        Reduce shuffle bytes=2116
        Reduce input records=156
        Reduce output records=156
        Spilled Records=312
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=870
        CPU time spent (ms)=1970
        Physical memory (bytes) snapshot=243326976
        Virtual memory (bytes) snapshot=2666557440
        Total committed heap usage (bytes)=256876544
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=1829
    File Output Format Counters
        Bytes Written=1487
Check the job's progress through YARN in the browser, via the tracking URL printed in the job output (http://192.168.56.102:8888/proxy/application_1478509135878_0001/, which goes through the web proxy configured earlier).
Then look at the final word-frequency result.
You can browse the HDFS filesystem in the browser as well; here the result file is read directly from the command line:
[root@hadoop01 hadoop-2.7.3]# bin/hdfs dfs -cat /fll_out/part-r-00000
Java HotSpot(TM) Client VM warning: You have loaded library /home/softwares/hadoop-2.7.3/lib/native/libhadoop.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
16/11/07 17:29:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
!=      1
"$-"    1
"$2"    1
"$EUID"  2
"$HISTCONTROL"  1
"$i"    3
"${-#*i}"       1
"0"     1
":${PATH}:"     1
"`id    2
"after" 1
"ignorespace"   1
#       13
$UID    1
&&      1
()      1
*)      1
*:"$1":*)       1
-f      1
-gn`"   1
-gt     1
-r      1
-ru`    1
-u`     1
-un`"   2
-x      1
-z      1
.       2
/etc/bashrc     1
/etc/profile    1
/etc/profile.d/ 1
/etc/profile.d/*.sh     1
/usr/bin/id     1
/usr/local/sbin 2
/usr/sbin       2
/usr/share/doc/setup-*/uidgid   1
002     1
022     1
199     1
200     1
2>/dev/null`    1
;       3
;;      1
=       4
>/dev/null      1
By      1
Current 1
EUID=`id        1
Functions       1
HISTCONTROL     1
HISTCONTROL=ignoreboth  1
HISTCONTROL=ignoredups  1
HISTSIZE        1
HISTSIZE=1000   1
HOSTNAME        1
HOSTNAME=`/usr/bin/hostname     1
It's    2
JAVA_HOME=/home/softwares/jdk1.8.0_111  1
LOGNAME 1
LOGNAME=$USER   1
MAIL    1
MAIL="/var/spool/mail/$USER"    1
NOT     1
PATH    1
PATH=$1:$PATH   1
PATH=$PATH:$1   1
PATH=$PATH:$JAVA_HOME/bin       1
Path    1
System  1
This    1
UID=`id 1
USER    1
USER="`id       1
You     1
[       9
]       3
];      6
a       2
after   2
aliases 1
and     2
are     1
as      1
better  1
case    1
change  1
changes 1
check   1
could   1
create  1
custom  1
custom.sh       1
default,        1
do      1
doing   1
done    1
else    5
environment     1
environment,    1
esac    1
export  5
fi      8
file    2
for     5
future  1
get     1
go      1
good    1
i       2
idea    1
if      8
in      6
is      1
it      1
know    1
ksh     1
login   2
make    1
manipulation    1
merging 1
much    1
need    1
pathmunge       6
prevent 1
programs,       1
reservation     1
reserved        1
script  1
set     1
sets    1
setup   1
shell   2
startup 1
system  1
the     1
then    8
this    2
threshold       1
to      5
uid/gids        1
uidgid  1
umask   3
unless  1
unset   2
updates 1
validity        1
want    1
we      1
what    1
wide    1
will    1
workaround      1
you     2
your    1
{       1
}       1
This confirms that the Hadoop cluster is working correctly.
That's all for this article. I hope it helps with your studies, and please keep supporting Yunqi Community.