How Spark is deployed

I used to think that Spark distinguished between its several cluster modes through changes to the configuration files.

After actually using it, I found that what distinguishes these modes is the master specified when the command is launched.

First, here is my configuration file (spark-env.sh), which stays unchanged throughout:

# Common configuration
export SCALA_HOME=/usr/local/scala/
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.65.x86_64/
export SPARK_LOCAL_DIRS=/usr/local/spark-1.5.1/
export SPARK_CONF_DIR=$SPARK_LOCAL_DIRS/conf/
export SPARK_PID_DIR=$SPARK_LOCAL_DIRS/pid_file/

# YARN
export HADOOP_HOME=/usr/local/hadoop-2.6.0
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop/

# Standalone
#export SPARK_MASTER_IP=a01.dmp.ad.qa.vm.m6
#export SPARK_MASTER_PORT=7077
# Number of CPU cores used by each worker process
#export SPARK_WORKER_CORES=4
# Memory available to each worker process
#export SPARK_WORKER_MEMORY=6g
# Number of worker processes running on each worker node
#export SPARK_WORKER_INSTANCES=1
# Local disk location workers use when executing tasks
#export SPARK_WORKER_DIR=$SPARK_LOCAL_DIRS/local

# Web UI port
export SPARK_MASTER_WEBUI_PORT=8099

# Spark history server configuration
export SPARK_HISTORY_OPTS="-Dspark.history.retainedApplications=20 -Dspark.history.fs.logDirectory=hdfs://a01.dmp.ad.qa.vm.m6:9000/user/spark/applicationHistory"
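
One thing worth noting about this file: the standalone-specific variables (SPARK_MASTER_*, SPARK_WORKER_*) only take effect when you launch the standalone daemons yourself. A minimal sketch, assuming the standard sbin scripts that ship with Spark 1.5 and the install path from the config above:

# Start the standalone master; reads SPARK_MASTER_IP, SPARK_MASTER_PORT, SPARK_MASTER_WEBUI_PORT
$ $SPARK_LOCAL_DIRS/sbin/start-master.sh

# Start the workers on the hosts listed in conf/slaves; reads the SPARK_WORKER_* settings
$ $SPARK_LOCAL_DIRS/sbin/start-slaves.sh

None of this decides which mode a later spark-shell or spark-submit runs in, as the experiment below shows.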

Let's first start a spark-shell the way we would for standalone mode.

Execute the following command at the command line:

$ spark-shell --master spark://a01.dmp.ad.qa.vm.m6.youku:7077
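
Once the shell is up, you can confirm which master the driver actually connected to. A quick check, assuming the default spark-shell session where sc is the pre-created SparkContext (the output below is illustrative):

scala> sc.master
res0: String = spark://a01.dmp.ad.qa.vm.m6.youku:7077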

First, look at the Spark UI page.

The shell you just started shows up under Running Applications in the Spark UI.

Now look at the Hadoop (YARN) job management page.

There are no running tasks.

Next, use Spark on YARN to start another spark-shell:

$ spark-shell --master yarn-client
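
Running the same check in this shell should now report the YARN client mode instead of a standalone master URL; in Spark 1.5, sc.master simply echoes back what was passed to --master (again, output is illustrative):

scala> sc.master
res0: String = yarn-client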

Look at the two pages above again: the YARN job management interface now shows a running application, while the Spark standalone UI shows none.

At this point I understood: the way we used to put it, "Is your Spark installed as standalone?" or "Is your Spark installed on YARN?", is simply not accurate.

However a Spark job is submitted, the mode is determined by the master specified at submission time, not by the configuration files.
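
In other words, one and the same installation can talk to any of the cluster managers; only the flag changes. A sketch of this, where MyApp and myapp.jar are hypothetical placeholders, not from the original post:

# Standalone mode: submit to the standalone master daemon
$ spark-submit --master spark://a01.dmp.ad.qa.vm.m6:7077 --class MyApp myapp.jar

# YARN client mode: driver runs locally, executors in YARN containers
$ spark-submit --master yarn-client --class MyApp myapp.jar

# Local mode: no cluster manager at all, 4 threads in one JVM
$ spark-submit --master local[4] --class MyApp myapp.jar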
