Spark 1.6.1 Cluster Deployment (Standalone)

Source: Internet
Author: User
1. Node Preparation

192.168.137.129  spslave2
192.168.137.130  spmaster
192.168.137.131  spslave1

2. Modify the Hostname

On each node, set the hostname to the name assigned above.
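So that the nodes can reach each other by name, the address-to-name mapping from step 1 is usually added to /etc/hosts on every node; with the addresses above, the entries would be:

```
192.168.137.129 spslave2
192.168.137.130 spmaster
192.168.137.131 spslave1
```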

3. Configure Passwordless SSH Login

First go to the user's home directory (cd ~) and run ls -a to view the files; one of the entries is .ssh, the folder that holds the keys. The key we generate will be placed in this folder. Now generate a key pair with:

ssh-keygen -t rsa -P ""

(this generates an RSA key pair with an empty passphrase). When prompted for input, just press Enter each time. Then enter the folder with cd .ssh (you can run ls to view its files) and append the generated public key id_rsa.pub to authorized_keys by executing:


cat id_rsa.pub >> authorized_keys

Copy each node's authorized_keys contents into the other nodes' authorized_keys files, then ssh between the nodes to verify that they can connect to each other without a password.
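The key-generation and append steps above can be sketched as a single script (a minimal local sketch; KEYDIR is a stand-in variable for ~/.ssh so the commands are explicit, and ssh-keygen from OpenSSH is assumed to be installed):

```shell
# Generate an RSA key pair with an empty passphrase and authorize it locally.
KEYDIR="${KEYDIR:-$HOME/.ssh}"
mkdir -p "$KEYDIR" && chmod 700 "$KEYDIR"
ssh-keygen -t rsa -P "" -f "$KEYDIR/id_rsa" -q        # -P "" = empty passphrase
cat "$KEYDIR/id_rsa.pub" >> "$KEYDIR/authorized_keys" # authorize this key
chmod 600 "$KEYDIR/authorized_keys"                   # sshd rejects loose permissions
```

After running this on every node, append each node's id_rsa.pub to the other nodes' authorized_keys as described above.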


4. Install and Configure the JDK

Install JDK 1.7 on all nodes. After the installation is complete, set the environment variables:

export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

5. Install and Configure Scala

Install Scala 2.10.6 on all nodes.
After the installation is complete, configure the environment variables:

export SCALA_HOME=/usr/scala-2.10.6
export PATH=$PATH:$SCALA_HOME/bin

6. Install and Configure Spark

6.1. Download Spark 1.6.1

6.2. Configure the Spark environment variables:

export SPARK_HOME=/usr/spark-1.6.0-bin-hadoop2.6
export PATH=$PATH:$SPARK_HOME/bin

6.3. Configure $SPARK_HOME/conf/slaves

First copy slaves.template, rename the copy to slaves, and edit its contents:
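The slaves file lists one worker hostname per line. With the node layout from step 1 (assuming the two slave nodes run the workers), it would contain:

```
spslave1
spslave2
```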

6.4. Configure $SPARK_HOME/conf/spark-env.sh

Similarly, copy spark-env.sh.template, rename the copy to spark-env.sh, and append the following content:

export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
export SPARK_MASTER_IP=spmaster
export SPARK_WORKER_MEMORY=1G
export SCALA_HOME=/usr/scala-2.10.6

7. Start Spark

Mode one: start the master and workers separately:

./sbin/start-master.sh
./sbin/start-slave.sh spark://spmaster:7077

(the argument to start-slave.sh is the master's Spark URL, here spark://spmaster:7077).

Mode two: start everything at once:

./sbin/start-all.sh
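The spark-env.sh values used in this walkthrough can be sanity-checked by writing the fragment to a scratch file, sourcing it, and echoing the variables (a minimal sketch; the paths and hostnames are the ones assumed throughout this article):

```shell
# Write the spark-env.sh fragment to a scratch file and source it.
cat > /tmp/spark-env-check.sh <<'EOF'
export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
export SPARK_MASTER_IP=spmaster
export SPARK_WORKER_MEMORY=1G
export SCALA_HOME=/usr/scala-2.10.6
EOF
. /tmp/spark-env-check.sh
echo "master=$SPARK_MASTER_IP worker_mem=$SPARK_WORKER_MEMORY"
```

If the echoed values look right, copy the same lines into $SPARK_HOME/conf/spark-env.sh on each node; the master's web UI (port 8080 by default) can then be used to confirm the workers registered.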
