Hadoop 1.2.1 Pseudo-Distributed Mode Installation Guide




I. Prerequisites

1. Prepare the operating system
(1) Linux can be used both as a development platform and as a production platform.
(2) Win32 can be used only as a development platform, and requires Cygwin.
2. Install JDK 1.6 or later.
3. Install SSH and configure passwordless login (as the root user; a quick verification sketch follows this list):
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
4. For a first-time installation, using the root user is recommended to avoid permission problems.
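To confirm that passwordless login actually works (a quick check added here, not part of the original steps), an SSH connection to localhost should succeed without asking for a password:

$ ssh localhost    # should log in directly, with no password prompt
$ exit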
II. Basic Preparation
1. Download Hadoop 1.2.1 and unpack it
[[email protected] jediael]$ wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz
[[email protected] jediael]$ tar -zxvf hadoop-1.2.1.tar.gz
A mirror inside mainland China is chosen here because it downloads faster.
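The remaining steps in this guide are run from inside the unpacked directory (under /opt/jediael, judging by the paths in the later log output); a small orientation sketch, assuming the archive was unpacked in place:

$ cd hadoop-1.2.1      # the directory created by the tar command above
$ ls bin conf          # sanity check: both directories should be present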
2. Edit conf/hadoop-env.sh and add the JAVA_HOME variable
(1) Add JAVA_HOME
[[email protected] hadoop-1.2.1]$ vi conf/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_51
(2) Run the hadoop command
[[email protected] hadoop-1.2.1]$ bin/hadoop
Usage: hadoop [--config confdir] COMMAND
where COMMAND is one of:
  namenode -format     format the DFS filesystem
  secondarynamenode    run the DFS secondary namenode
  namenode             run the DFS namenode
  datanode             run a DFS datanode
  dfsadmin             run a DFS admin client
  mradmin              run a Map-Reduce admin client
  fsck                 run a DFS filesystem checking utility
  fs                   run a generic filesystem user client
Output like the above indicates that the installation is working.
III. Configure Pseudo-Distributed Mode
1. Edit conf/core-site.xml and add the following property
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
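One optional addition that is not part of the original guide: the format output further below shows that, by default, HDFS metadata and data live under /tmp (e.g. /tmp/hadoop-jediael), which many systems clear on reboot. Setting hadoop.tmp.dir in core-site.xml moves this to a more durable location; the path below is only an example.

    <property>
        <!-- Example path only; create the directory and adjust it for your environment. -->
        <name>hadoop.tmp.dir</name>
        <value>/opt/jediael/hadoop-tmp</value>
    </property>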
2. Edit conf/hdfs-site.xml and add the following property
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

3. Edit conf/mapred-site.xml and add the following property
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
    </property>
</configuration>
IV. Start Hadoop
1. Format HDFS
[[email protected] hadoop-1.2.1]$ bin/hadoop namenode -format
14/08/16 23:50:02 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = jediael/10.171.29.191
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.7.0_51
************************************************************/
14/08/16 23:50:02 INFO util.GSet: Computing capacity for map BlocksMap
14/08/16 23:50:02 INFO util.GSet: VM type       = 64-bit
14/08/16 23:50:02 INFO util.GSet: 2.0% max memory = 1013645312
14/08/16 23:50:02 INFO util.GSet: capacity      = 2^21 = 2097152 entries
14/08/16 23:50:02 INFO util.GSet: recommended=2097152, actual=2097152
14/08/16 23:50:02 INFO namenode.FSNamesystem: fsOwner=jediael
14/08/16 23:50:02 INFO namenode.FSNamesystem: supergroup=supergroup
14/08/16 23:50:02 INFO namenode.FSNamesystem: isPermissionEnabled=true
14/08/16 23:50:02 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
14/08/16 23:50:02 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
14/08/16 23:50:02 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
14/08/16 23:50:02 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/08/16 23:50:03 INFO common.Storage: Image file /tmp/hadoop-jediael/dfs/name/current/fsimage of size 113 bytes saved in 0 seconds.
14/08/16 23:50:03 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/tmp/hadoop-jediael/dfs/name/current/edits
14/08/16 23:50:03 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/tmp/hadoop-jediael/dfs/name/current/edits
14/08/16 23:50:03 INFO common.Storage: Storage directory /tmp/hadoop-jediael/dfs/name has been successfully formatted.
14/08/16 23:50:03 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at jediael/10.171.29.191
************************************************************/

2. Start Hadoop
[[email protected] hadoop-1.2.1]# bin/start-all.sh
starting namenode, logging to /opt/jediael/hadoop-1.2.1/libexec/../logs/hadoop-root-namenode-jediael.out
localhost: starting datanode, logging to /opt/jediael/hadoop-1.2.1/libexec/../logs/hadoop-root-datanode-jediael.out
localhost: starting secondarynamenode, logging to /opt/jediael/hadoop-1.2.1/libexec/../logs/hadoop-root-secondarynamenode-jediael.out
starting jobtracker, logging to /opt/jediael/hadoop-1.2.1/libexec/../logs/hadoop-root-jobtracker-jediael.out
localhost: starting tasktracker, logging to /opt/jediael/hadoop-1.2.1/libexec/../logs/hadoop-root-tasktracker-jediael.out
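For completeness (not an explicit step in the original guide), the matching script that shuts all of the daemons down again is stop-all.sh:

$ bin/stop-all.sh    # stops the jobtracker, tasktracker, namenode, datanode and secondarynamenode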

To avoid having to type passwords repeatedly, the cluster is started as the root user, with passwordless SSH configured beforehand. How do you set up passwordless SSH login for an ordinary user? The same approach did not work, and neither did sudo. Still to be resolved.
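A common cause of that problem (my note, not verified against the original setup) is that sshd ignores an ordinary user's key when the home directory, ~/.ssh, or authorized_keys is writable by group or others; a sketch of the usual fix, run as that user:

$ chmod go-w ~                       # home directory must not be group/world writable
$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys
$ ssh localhost                      # should now log in without a password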
By default, logs are written to ${HADOOP_HOME}/logs unless ${HADOOP_LOG_DIR} has been changed.
3. Visit the following two pages to verify that the installation succeeded
  • NameNode - http://localhost:50070/
  • JobTracker - http://localhost:50030/
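If no browser is available on the machine (for example, a remote server), a rough command-line check added here is to fetch the two pages with curl; each should return HTML rather than a connection error:

$ curl -s http://localhost:50070/ | head
$ curl -s http://localhost:50030/ | head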
4. Use jps to check that each daemon is running
[[email protected] hadoop-1.2.0]# jps
3148 JobTracker
3280 TaskTracker
3052 SecondaryNameNode
2920 DataNode
2801 NameNode
3442 Jps

V. Verify the environment with a simple Hadoop program
See http://blog.csdn.net/jediael_lu/article/details/37596469
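As a quicker sanity check that does not depend on the linked post (a suggestion of mine), the examples jar bundled with the Hadoop 1.2.1 distribution can be run from the install directory, for instance the pi estimator:

$ bin/hadoop jar hadoop-examples-1.2.1.jar pi 2 100   # 2 map tasks, 100 samples each
# If the job completes and prints "Estimated value of Pi is ...", MapReduce and HDFS are both working.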
