Apache Spark Source Code Walkthrough 2 -- Job Submission and Execution

Reprinted from: http://www.cnblogs.com/hseagle/p/3673123.html

Overview

This article takes WordCount as an example, detailing the process by which Spark creates and runs a job, with a focus on process and thread creation.

Setting up the experimental environment

Make sure the following prerequisites are met before proceeding.

    1. Download the Spark 0.9.1 binary
    2. Install Scala
    3. Install SBT
    4. Install Java
Running spark-shell in single-machine mode (local mode)

Local mode is the simplest to run; just execute the following command, assuming the current directory is $SPARK_HOME:

MASTER=local bin/spark-shell

"Master=local" means that it is currently running in stand-alone mode

Running in local cluster mode

Local cluster mode is a pseudo-cluster mode that simulates a standalone cluster in a single-machine environment. The startup sequence is as follows:

    1. Start the master
    2. Start a worker
    3. Start spark-shell
Master
$SPARK_HOME/sbin/start-master.sh

Note the log output at startup; logs are saved in the $SPARK_HOME/logs directory by default.

The master mainly runs the class org.apache.spark.deploy.master.Master and starts listening on port 8080; this can be confirmed in the log.

Modify Configuration
    1. Enter the $SPARK_HOME/conf directory
    2. Rename spark-env.sh.template to spark-env.sh
    3. Edit spark-env.sh and add the following:
export SPARK_MASTER_IP=localhost
export SPARK_LOCAL_IP=localhost
Running worker
bin/spark-class org.apache.spark.deploy.worker.Worker spark://localhost:7077 -i 127.0.0.1  -c 1 -m 512M

Once the worker has started, it connects to the master. Open the master web UI to see the connected worker. The master web UI listens at http://localhost:8080.

Start Spark-shell
MASTER=spark://localhost:7077 bin/spark-shell

If all goes well, you will see the following message.

Created spark context. Spark context available as sc.

You can open localhost:4040 in your browser to see the following:

    1. Stages
    2. Storage
    3. Environment
    4. Executors
WordCount

With the environment ready, let's run the simplest example. Enter the following code in spark-shell:

scala> sc.textFile("README.md").filter(_.contains("Spark")).count

The code above counts the number of lines in README.md that contain the string "Spark"; the same example is broken into separate steps below.
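
To make the later discussion of transformations and actions easier to follow, here is the same one-liner broken into steps as a small spark-shell sketch (the intermediate variable names are our own, not from the original article):

scala> val file = sc.textFile("README.md")                // creates an RDD; nothing is read yet
scala> val sparkLines = file.filter(_.contains("Spark"))  // a lazy transformation; still no job runs
scala> sparkLines.count                                   // the action: this call submits a job via sc.runJob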

Detailed deployment process

The components in a Spark deployment environment are as follows.

    • Driver Program: in this example, the driver program is the spark-shell in which the WordCount statement was entered.
    • Cluster Manager: corresponds to the master mentioned above; its main role is deployment management.
    • Worker Node: a slave node relative to the master. Executors run on the worker node, and each executor can correspond to a thread. An executor handles two basic kinds of logic: one is the driver program; the other is the tasks of a submitted job after it has been split into stages, where each stage can run one or more tasks.

Note: in cluster mode, the Cluster Manager runs in one JVM process, while each worker runs in another JVM process. In local cluster mode, these JVM processes are all on the same machine; in a real Standalone, Mesos, or YARN cluster, the workers and the master are distributed across different hosts.

Job generation and execution

In simplified form, the process of generating a job is as follows:

    1. The application first creates an instance of SparkContext, for example the instance sc
    2. The SparkContext instance is used to create an RDD
    3. Through a series of transformation operations, the original RDD is converted into RDDs of other types
    4. When an action is applied to the transformed RDD, SparkContext's runJob method is invoked
    5. The call to sc.runJob is the starting point of a chain of calls; the crucial transition happens here (a minimal standalone sketch of steps 1-4 follows this list)
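
As a rough illustration of steps 1-4, here is a minimal standalone Scala sketch; the application name, file path, and object name are our own choices, not from the original article. In spark-shell, the SparkContext of step 1 is created for you as sc.

import org.apache.spark.SparkContext

// A minimal sketch only; app name, path, and object name are illustrative choices.
object WordCountSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local", "WordCountSketch")  // step 1: create the SparkContext
    val lines = sc.textFile("README.md")                   // step 2: create an RDD
    val sparkLines = lines.filter(_.contains("Spark"))     // step 3: a transformation, evaluated lazily
    val n = sparkLines.count()                             // step 4: the action invokes sc.runJob
    println("lines containing Spark: " + n)
    sc.stop()
  }
}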

The call path is roughly as follows

    1. sc.runJob -> DAGScheduler.runJob -> submitJob
    2. DAGScheduler::submitJob creates a JobSubmitted event and sends it to the inner class eventProcessActor
    3. After receiving JobSubmitted, eventProcessActor calls the processEvent handler function
    4. The job is converted into stages, generating finalStage, which is then submitted for execution; the key is the call to submitStage
    5. submitStage calculates the dependencies between stages; dependencies are divided into wide dependencies and narrow dependencies
    6. If the calculation finds that the current stage has no dependencies, or that all of its dependencies have been prepared, the tasks are submitted (a simplified sketch of this recursion follows the list)
    7. Submitting the tasks is done by calling the function submitMissingTasks
    8. Which worker the tasks actually run on is managed by the TaskScheduler; that is, submitMissingTasks calls TaskScheduler::submitTasks
    9. In TaskSchedulerImpl, the corresponding backend is created according to Spark's current run mode; when running on a single machine this is LocalBackend
    10. LocalBackend receives the reviveOffers call passed in by TaskSchedulerImpl
    11. reviveOffers -> Executor.launchTask -> TaskRunner.run
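
To make steps 4-7 more concrete, here is a simplified, self-contained sketch of the submitStage recursion; the Stage case class and the bookkeeping are illustrative stand-ins, not the actual DAGScheduler code.

import scala.collection.mutable

// Illustrative stand-in for a stage and its parent (dependency) stages.
case class Stage(id: Int, parents: List[Stage])

object SubmitStageSketch {
  private val finished = mutable.Set[Int]()   // stages whose tasks have already been submitted

  // Simplified model of the recursion: submit missing parents first,
  // then submit this stage's tasks once all dependencies are ready.
  def submitStage(stage: Stage): Unit = {
    val missing = stage.parents.filterNot(p => finished.contains(p.id))
    if (missing.isEmpty) {
      submitMissingTasks(stage)               // step 7 in the list above
    } else {
      missing.foreach(submitStage)            // recurse into unfinished parent stages
      submitStage(stage)                      // retry once the parents have been handled
    }
  }

  private def submitMissingTasks(stage: Stage): Unit = {
    println("submitting tasks for stage " + stage.id)
    finished += stage.id
  }

  def main(args: Array[String]): Unit = {
    val shuffleStage = Stage(1, Nil)
    val finalStage   = Stage(2, List(shuffleStage))
    submitStage(finalStage)                   // submits stage 1, then stage 2
  }
}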

Code snippet: Executor.launchTask

def launchTask(context: ExecutorBackend, taskId: Long, serializedTask: ByteBuffer) {
  val tr = new TaskRunner(context, taskId, serializedTask)
  runningTasks.put(taskId, tr)
  threadPool.execute(tr)
}

After all this tracing, the point is that the real processing ultimately happens inside a TaskRunner running within an Executor.
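
To illustrate the pattern the snippet above relies on (a task wrapped in a Runnable and handed to a thread pool), here is a minimal self-contained sketch; this TaskRunner is an illustrative stand-in, not the real org.apache.spark.executor class.

import java.util.concurrent.Executors

// Illustrative stand-in for Spark's TaskRunner: a Runnable executed on a thread pool.
class TaskRunner(taskId: Long) extends Runnable {
  override def run(): Unit = {
    // in Spark, this is where the task is deserialized and executed
    println("running task " + taskId + " on " + Thread.currentThread().getName)
  }
}

object LaunchTaskSketch {
  def main(args: Array[String]): Unit = {
    val threadPool = Executors.newCachedThreadPool()
    (1L to 3L).foreach(id => threadPool.execute(new TaskRunner(id)))
    threadPool.shutdown()
  }
}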
