How to install Savanna

Unpack savanna-all.tar.gz from the software package and install it:

    tar -C / -xzf savanna-all.tar.gz

This installs the following:

1. /openstack-horizon
2. /etc/savanna/savanna.conf
3. /usr/local/bin/savanna-api and savanna-db-manage
4. /usr/local/lib/python2.7/dist-packages
5. Upload the image to the Glance server:

    glance image-create --name=vanilla-hadoop.image --disk-format=qcow2 --container-format=bare < ./savanna-0.1.2-hadoop.qcow2

I. Installing and configuring Horizon

1. Copy openstack-horizon from the software package to the root directory / of the target machine.
2. cp openstack_dashboard/local/local_settings.py.example openstack_dashboard/local/local_settings.py
3. Edit the configuration in local_settings.py:

    [skipped]
    OPENSTACK_HOST = "172.18.79.139"    # <------ Keystone address
    OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
    OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"
    [skipped]

4. Register the Horizon service on the Keystone server (if OpenStack is already installed successfully, this is already registered and can be skipped).
5. Start the Horizon service: python manage.py runserver 0.0.0.0:6666 (any port number will do).
6. From a client browser, visit the IP of the host running Horizon plus that port.

Possible problems

1. The lessc CSS compiler is missing.
Answer: install the packages Savanna depends on (shipped in the software package) in the following order:

    libc-ares2 amd64
    libv8-3.8.9.20 amd64
    libev4 amd64
    nodejs amd64
    node-less

Then set up the stylesheet link so the system /usr/bin/lessc is used to render Horizon's pages:

    ln -s /usr/bin/lessc <path_to_horizon>/bin/less/lessc

II. Installing and configuring Savanna

1. Edit the /etc/savanna/savanna.conf configuration file. You can change the port here and set the Keystone authentication parameters:

    os_auth_host=10.0.0.100
    os_auth_port=35357
    os_admin_username=nova
    os_admin_password=service_pass
    os_admin_tenant_name=service

2. Register the Savanna service with Keystone:

    keystone service-create --name savanna --type mapreduce --description "Savanna"
    keystone endpoint-create --service-id ed08fa240cbd40898eab09eb9d5c3d0c --internalurl "http://25.8.67.32:9000/v0.2/%(tenant_id)s" --publicurl "http://25.8.67.32:9000/v0.2/%(tenant_id)s" --adminurl "none"

The service-id above is the id returned by the service-create command, the IP address is the IP of the Horizon host, and the port is the one specified in the Savanna configuration file.

During this step you may need to edit /root/.bashrc on the host where Keystone runs:

    export OS_TENANT_NAME=admin
    export OS_USERNAME=admin
    export OS_PASSWORD=keystone_admin
    export OS_AUTH_URL="http://25.8.67.32:5000/v2.0/"
    source /root/.bashrc

3. Create the database Savanna uses:

    savanna-db-manage --config-file /etc/savanna/savanna.conf

Experiment with the remaining parameters yourself; see savanna-db-manage in the software package.

4. Start the Savanna service:

    savanna-api --config-file /etc/savanna/savanna.conf

After completing the steps above, one problem may remain: some of the services Savanna provides cannot be handled correctly because of the version number. savanna-all ships version 1.0, and some services in 1.0 are not finished yet, so creating a node template fails (a cluster template can be created successfully). We therefore downgrade Savanna to the stable version, mainly replacing Savanna's API files and the Savanna source files that were overwritten by version 1.0:

a. Copy the savanna files from the software package to the target machine.
b. Install them with setup.py.
c. The main work is: replace the Savanna executables under /usr/local/bin; replace savanna.conf (note that it may need to be reconfigured); and replace the contents under /usr/local/lib/python2.7/dist-packages, making sure the API files are all switched to the 0.2 version.

    sudo savanna-manage --config-file /etc/savanna/savanna.conf reset-db --with-gen-templates
    sudo savanna-api --config-file /etc/savanna/savanna.conf

A quick smoke test of the Keystone registration and the running savanna-api is sketched below.
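The following sketch checks that the Keystone registration and the savanna.conf settings above line up: it requests a token from Keystone and then lists node templates through the registered Savanna endpoint. This is only an illustration, not part of the package; the addresses and credentials are the example values used in this article, and the /node-templates resource path is assumed from the v0.2 API that the templates above are created against.

    # Minimal smoke test for the Savanna endpoint registered above (Python 2).
    # Replace the example addresses and credentials with your own.
    import json
    import urllib2

    KEYSTONE_URL = "http://25.8.67.32:5000/v2.0"
    SAVANNA_URL = "http://25.8.67.32:9000/v0.2"

    # 1. Ask Keystone (v2.0 password auth) for a token and the tenant id.
    auth_body = json.dumps({"auth": {
        "tenantName": "admin",
        "passwordCredentials": {"username": "admin",
                                "password": "keystone_admin"}}})
    req = urllib2.Request(KEYSTONE_URL + "/tokens", auth_body,
                          {"Content-Type": "application/json"})
    access = json.load(urllib2.urlopen(req))["access"]
    token = access["token"]["id"]
    tenant_id = access["token"]["tenant"]["id"]

    # 2. Call the Savanna endpoint with the token; /node-templates is the
    #    assumed v0.2 resource for the templates discussed in this article.
    req = urllib2.Request("%s/%s/node-templates" % (SAVANNA_URL, tenant_id),
                          headers={"X-Auth-Token": token})
    print urllib2.urlopen(req).read()

If the second request returns a JSON document rather than a 401 or 404, the endpoint URL, port, and credentials in savanna.conf and Keystone are consistent.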
5. Questions and Answers

Q1: Cannot create a DataNode template through the website.
Answer: go to /openstack-horizon/openstack_dashboard/api/savanna.py, modify line 139, and change "task_tracker_opts" to "job_tracker_opts":

    if "tt" in str(node_type).lower():
        #template_data["task_tracker"] = task_tracker_opts
        template_data["task_tracker"] = job_tracker_opts
    if "dn" in str(node_type).lower():
        #template_data["data_node"] = data_node_opts
        template_data["data_node"] = name_node_opts

Q2: When I start the Hadoop cluster, the namenode cannot get the correct IP addresses of the slaves.
Answer: go to /usr/local/lib/python2.7/dist-packages/savanna-0.1.2.a3.gda0f4b7-py2.7.egg/savanna/service/cluster_ops.py and modify the function _check_if_up(nova, node):

    if len(nets) == 0:
        print "VM's networking is not configured yet"
        eventlet.sleep(10)
        return

This could be improved in other ways; see the first sketch at the end of this section.

Q3: When starting Hadoop, the log file shows that Hadoop cannot find the host "ubuntu".
Answer: go to cluster_ops.py and modify the following function, adding the mapping between hostname and IP address:

    def _generate_hosts(clmap):
        hosts = "127.0.0.1 localhost\n"
        hosts += "127.0.0.1 ubuntu\n"

Q4: Through the log, we found that Hadoop cannot create the following files.
Answer: go to cluster_ops.py and add the following code in the function _setup_node(node, clmap) before ret = _open_channel_and_execute():

    sftp.mkdir('/var/run/hadoop')
    sftp.chmod('/var/run/hadoop', 0777)

Q5: How to configure the IP behind http://%s:50030 so that the jobtracker details can be viewed.
Answer: go to node1 and configure the forwarding rules with ipvsadm:

    ipvsadm -A -t node1:port -s rr
    ipvsadm -a -t node1:port -r privateIP:50030 -m -w 1

Q6: Hadoop has no privilege to create /tmp/savanna-hadoop-start-all.log.
Answer: go to cluster.py, modify the function _start_cluster(), and change the location of the log file.

Q7: There may be something wrong in the function _setup_ssh_connection(host, ssh) in cluster_ops.py; you should specify port 22 in ssh.connect(host, port, username, password). A sketch of this fix is at the end of this section.
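For Q2, one possible improvement over the single ten-second sleep in _check_if_up is a bounded retry loop, so cluster creation neither gives up after one pass nor waits forever. This is a hypothetical sketch, not the package's code: the node.id attribute and the use of nova.servers.get are assumptions about how the surrounding code names things.

    import eventlet

    def _wait_for_networking(nova, node, retries=30, delay=10):
        # Hypothetical replacement for the fixed sleep in _check_if_up:
        # poll until the VM reports at least one network, up to a limit.
        for _ in range(retries):
            server = nova.servers.get(node.id)   # refresh the server record
            nets = server.networks.values()
            if len(nets) > 0:
                return nets
            print "VM's networking is not configured yet"
            eventlet.sleep(delay)
        raise RuntimeError("VM %s never got a network" % node.id)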
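For Q7, a minimal sketch of what the explicit-port connection could look like with paramiko; the username and password arguments are placeholders, not the image's real credentials:

    import paramiko

    def _setup_ssh_connection(host, ssh, username="root", password="changeme"):
        # Pass port 22 explicitly, as suggested in Q7, rather than relying
        # on the positional argument order of ssh.connect().
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        ssh.connect(host, port=22, username=username, password=password)

    # Usage:
    #   client = paramiko.SSHClient()
    #   _setup_ssh_connection("10.0.0.5", client)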
