Installing the Spark 1.3.1 stand-alone environment


This article describes how to install a Spark stand-alone environment for testing and development. It is divided into the following 4 parts:
(1) Environment preparation
(2) Installing Scala
(3) Installing Spark
(4) Verifying the installation

1. Environment Preparation
(1) Companion software version requirements: Spark runs on Java 6+ and Python 2.6+. For the Scala API, Spark 1.3.1 uses Scala 2.10, so you will need a compatible Scala version (2.10.x).
(2) Install Linux, the JDK, and Python. Most Linux distributions ship with a JDK and Python already, but note that the default JDK is usually OpenJDK; reinstalling the Oracle JDK is recommended.
(3) IP: 10.171.29.191, hostname: master
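
Before going on, it is worth confirming the prerequisites are in place. A minimal check (the versions shown are just the ones this walkthrough assumes; your output will differ):

$ java -version
java version "1.7.0_51"
$ python -V
Python 2.6.6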


2. Install Scala
(1) Download Scala
wget http://downloads.typesafe.com/scala/2.10.5/scala-2.10.5.tgz

(2) Unpack the archive
$ tar -zxvf scala-2.10.5.tgz

(3) Configure environment variables
# vi /etc/profile
#SCALA VARIABLES START
export SCALA_HOME=/home/jediael/setupfile/scala-2.10.5
export PATH=$PATH:$SCALA_HOME/bin
#SCALA VARIABLES END

$ source /etc/profile
$ scala -version
Scala code runner version 2.10.5 -- Copyright 2002-2013, LAMP/EPFL

(4) Verify Scala
$ scala
Welcome to Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_51).
Type in expressions to have them evaluated.
Type :help for more information.

scala> 9*9
res0: Int = 81
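
Exit the REPL with :quit when you are done:

scala> :quit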

3. Install Spark
(1) Download spark
wget http://mirror.bit.edu.cn/apache/spark/spark-1.3.1/spark-1.3.1-bin-hadoop2.6.tgz

(2) Unpack Spark
$ tar -zxvf spark-1.3.1-bin-hadoop2.6.tgz

(3) Configure environment variables
# vi /etc/profile
#SPARK VARIABLES START
export SPARK_HOME=/mnt/jediael/spark-1.3.1-bin-hadoop2.6
export PATH=$PATH:$SPARK_HOME/bin
#SPARK VARIABLES END

$ source /etc/profile
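
You can confirm that the variable took effect:

$ echo $SPARK_HOME
/mnt/jediael/spark-1.3.1-bin-hadoop2.6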

(4) Configure Spark
$ pwd
/mnt/jediael/spark-1.3.1-bin-hadoop2.6/conf

$ mv spark-env.sh.template spark-env.sh
$ vi spark-env.sh
export SCALA_HOME=/home/jediael/setupfile/scala-2.10.5
export JAVA_HOME=/usr/java/jdk1.7.0_51
export SPARK_MASTER_IP=10.171.29.191
export SPARK_WORKER_MEMORY=512m
export MASTER=spark://10.171.29.191:7077

$ vi slaves
master
(On a single machine the master host also acts as the worker, so the slaves file simply lists the hostname master.)

(5) Start Spark
$ pwd
/mnt/jediael/spark-1.3.1-bin-hadoop2.6/sbin
$ ./start-all.sh
Note that Hadoop also has a start-all.sh script, so be sure to run the one in Spark's sbin directory.

$ jps
30302 Worker
30859 Jps
30172 Master
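
The matching stop script lives in the same sbin directory; run it to shut everything down when you are finished:

$ ./stop-all.sh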

4. Verify the Installation
(1) Run an example
$ bin/run-example org.apache.spark.examples.SparkPi
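
Among the log output there should be a line like the one below. SparkPi estimates pi by random sampling, so the exact value varies from run to run:

Pi is roughly 3.14158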

(2) View the cluster environment in the master web UI
http://master:8080/

(3) Enter spark-shell
$ spark-shell
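
Once the scala> prompt appears, a minimal sanity check is to run a small job; sc is the SparkContext that spark-shell creates automatically:

scala> sc.parallelize(1 to 100).reduce(_ + _)
res0: Int = 5050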

(4) View jobs and other information
http://master:4040/jobs/
Note that this application UI on port 4040 is only available while an application (such as spark-shell) is running.
