Spark 1.1.0 installation test (Distributed yarn-cluster mode)


Spark version: spark-1.1.0-bin-hadoop2.4 (download: http://spark.apache.org/downloads.html)

For more information about the server environment, see the previous blog post, "Notes on configuring an HBase CentOS production environment".

(hbase-r is the ResourceManager; hbase-1, hbase-2, and hbase-3 are NodeManagers.)

 

1. installation and configuration (yarn-cluster mode; documentation reference: http://spark.apache.org/docs/latest/running-on-yarn.html)

In yarn-cluster mode, Spark uploads the application jar to HDFS and YARN runs the program distributed across the NodeManagers. In this mode you do not need to configure Spark's own master and slaves.

 

(1) install Scala

Download and install the RPM package
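For example, something like the following. The version number and archive URL below are assumptions (Spark 1.1.0 was built against Scala 2.10.x), so pick the RPM that matches your cluster:

```shell
# Assumed version and URL: Spark 1.1.0 targets Scala 2.10.x; adjust as needed.
SCALA_VERSION=2.10.4
RPM_URL="http://www.scala-lang.org/files/archive/scala-${SCALA_VERSION}.rpm"
# Then, as root:
#   wget "$RPM_URL"
#   rpm -ivh "scala-${SCALA_VERSION}.rpm"
#   scala -version    # verify the install
echo "$RPM_URL"
```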

  

(2) Spark is installed on all machines: hbase-0, hbase-r, hbase-1, hbase-2, hbase-3.

After decompressing, copy the contents of the directory to /hbase/spark; the configuration file paths below are relative to this directory. After all configuration is complete, copy the installation directory and environment variables to all machines.
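One way to push the install directory and shell profile out to the other nodes, sketched with the hostnames used in this cluster (the loop only prints the commands; remove the echo to run them):

```shell
# Hostnames from this cluster; /hbase/spark is the install directory chosen above.
NODES="hbase-r hbase-1 hbase-2 hbase-3"
for node in $NODES; do
  echo scp -r /hbase/spark "${node}:/hbase/"   # remove 'echo' to actually copy
  echo scp ~/.bashrc "${node}:~/"
done
```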

(3) environment variables, ~/.bashrc

export SPARK_HOME="/hbase/spark"
export SCALA_HOME="/usr/share/scala"

 

(4) set Spark properties, conf/spark-defaults.conf

# options for yarn-cluster mode
spark.yarn.applicationMaster.waitTries          10
spark.yarn.submit.file.replication              1
spark.yarn.preserve.staging.files               false
spark.yarn.scheduler.heartbeat.interval-ms      5000
spark.yarn.max.executor.failures                6
spark.yarn.historyServer.address                hbase-r:10020
spark.yarn.executor.memoryOverhead              512
spark.yarn.driver.memoryOverhead                512

 

(5) Configure the firewall on all machines to allow all ports on the intranet (setting a specific port range for each service is impractical: Hadoop, HBase, Spark, YARN, and ZooKeeper together listen on too many ports).
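On CentOS with iptables, one way to do this is a single rule accepting everything from the intranet subnet instead of per-port rules. The subnet below is an assumption; substitute the one your nodes actually share:

```shell
# Assumed intranet subnet; replace with your own.
SUBNET="192.168.1.0/24"
# As root on each node:
#   iptables -I INPUT -s "$SUBNET" -j ACCEPT
#   service iptables save
echo "$SUBNET"
```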

2. test the Java example

./bin/spark-submit --class org.apache.spark.examples.JavaSparkPi --master yarn-cluster --num-executors 3 --driver-memory 1024m  --executor-memory 1024m --executor-cores 1 lib/spark-examples*.jar 20

After the job runs successfully, the client output shows:

     yarnAppState: FINISHED
     distributedFinalState: SUCCEEDED
     appTrackingUrl: http://hbase-r:18088/proxy/application_1414738706972_0011/A

Open the appTrackingUrl and you will see the following; note FinalStatus: SUCCEEDED.

    Application Overview
        User:              webadmin
        Name:              org.apache.spark.examples.JavaSparkPi
        Application Type:  SPARK
        Application Tags:
        State:             FINISHED
        FinalStatus:       SUCCEEDED
        Started:           3-Nov-2014 15:17:19
        Elapsed:           43sec
        Tracking URL:      History
        Diagnostics:

    ApplicationMaster
        Attempt Number    Start Time             Node          Logs
        1                 3-Nov-2014 15:17:19    hbase-1:8042  logs
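To inspect the executor output after the application finishes, the aggregated container logs can be pulled with the standard YARN CLI. The application ID below is the one from this run (yours will differ), and log aggregation must be enabled in yarn-site.xml:

```shell
# Application ID taken from the tracking URL above; substitute your own run's ID.
APP_ID="application_1414738706972_0011"
# Requires yarn.log-aggregation-enable=true; run after the application finishes:
#   yarn logs -applicationId "$APP_ID"
echo "$APP_ID"
```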

 

 

 
