Install Spark
Spark must be installed on the master, slave1, and slave2 machines.
First, install Spark on the master. The specific steps are as follows:
Step 1: Decompress Spark on the master:
Decompress the package directly to the current directory:
In this case, decompressing creates the Spark directory.
Resilient Distributed Datasets
One of the concepts that Spark revolves around is the resilient distributed dataset (RDD). An RDD is a fault-tolerant collection of elements that can be operated on in parallel. There are two ways to create RDDs: parallelize an existing collection in your driver program, or reference a dataset in an external storage system, such as a shared file system, HDFS, or HBase.
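As a small illustration of these two creation paths (a sketch only: the master URL, application name, and HDFS path are assumptions, not taken from the text above):

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical setup for a local test.
val conf = new SparkConf().setAppName("RDDCreationSketch").setMaster("local[2]")
val sc = new SparkContext(conf)

// 1. Parallelize a collection that already exists in the driver program.
val distData = sc.parallelize(Seq(1, 2, 3, 4, 5))

// 2. Reference a dataset in an external storage system (local file, HDFS, HBase, ...).
val lines = sc.textFile("hdfs://master:9000/user/spark/README.md") // illustrative path

println(distData.sum())  // 15.0
println(lines.count())   // number of lines in the file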
1. First, pull the image locally from https://hub.docker.com/r/gettyimages/spark/:

$ docker pull gettyimages/spark

2. Download the docker-compose.yml file that defines the Spark cluster from https://github.com/gettyimages/docker-spark/blob/master/docker-compose.yml, then start it:

$ docker-compose up
Creating spark_master_1
Creating spark_worker_1
Attaching to spark_master_1, spark_worker_1
command: add the following content, which puts the bin directory on the PATH, and make it take effect with source.
1.4 Verification
Typing scala displays the version, as shown below. You can also program directly in the Scala shell:
2. Install Spark
2.1 Download Spark
Download address: http://spark.apache.org/downloads.html. For learning purposes, I downloaded the pre-compiled version 1.6.
2.2 Decompression
The downloaded package is then decompressed:
Step 1: Test Spark through Spark Shell
Step 1: Start the Spark cluster. This is covered in detail in the third part. After the Spark cluster is started, the Web UI looks as follows:
Step 2: Start Spark Shell:
At this point, you can see the shell in the following Web console:
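To confirm the shell works end to end, you can run a small job directly in it; the snippet below is only a sketch, and the HDFS path is an assumption that should be replaced with a file that exists in your environment:

// Inside spark-shell the SparkContext is already available as sc.
val textFile = sc.textFile("hdfs://master:9000/user/spark/README.md") // illustrative path
textFile.count()                              // total number of lines
textFile.filter(_.contains("Spark")).count()  // lines that mention "Spark"

Each action submitted this way also appears as a job in the Web console shown above.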
Introduction to Spark Basics, Cluster Setup, and Spark Shell
This part mainly follows the Spark slides, combined with hands-on practice to strengthen understanding of the concepts.
Spark Installation and Deployment
With the theory mostly covered, move on to the hands-on experiments:
Exercise 1: Use Spark Shell (local mode) to
Spark Communication Module
1. Spark's cluster manager supports local, standalone, Mesos, YARN, and other deployment modes. In order to
Centralized communication mode
1. RPC (Remote Procedure Call)
Spark Communication mechanism:
The advantages and characteristics of Akka are as follows:
1. Parallel and distributed: Akka is designed around asynchronous communication and dis
Step 4: Build and test the Spark development environment through Spark IDE
Step 1: Import the spark-hadoop package: select "File" > "Project Structure" > "Libraries", then click "+" to import the corresponding spark-hadoop package:
Click "OK" to confirm:
Click "OK ":
After IDEA
1. Introduction to Spark Streaming
1.1 Overview
Spark Streaming is an extension of the core Spark API that enables high-throughput, fault-tolerant processing of real-time data streams. Data can be ingested from many sources, including Kafka, Flume, Twitter, ZeroMQ, Kinesis, and TCP sockets, and once acquired it can be processed with high-level operations such as map, reduce, join, and window.
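As a minimal sketch of this model (the host, port, and batch interval below are illustrative, not taken from the text), a streaming word count over a TCP socket looks roughly like this:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Build a StreamingContext with a 10-second batch interval.
val sparkConf = new SparkConf().setAppName("NetworkWordCount").setMaster("local[2]")
val ssc = new StreamingContext(sparkConf, Seconds(10))

// Receive lines from a TCP source and count words within each batch.
val lines = ssc.socketTextStream("localhost", 9999)
val wordCounts = lines.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
wordCounts.print()

ssc.start()             // start the computation
ssc.awaitTermination()  // wait for it to terminate

While the job is running, test input can be fed to it with a tool such as netcat (nc -lk 9999).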
Open IDEA, and under src/main/scala right-click to create a Scala class named SimpleApp with the following content:

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object SimpleApp {
  def main(args: Array[String]) {
    val logFile = "/home/spark/opt/spark-1.2.0-bin-hadoop2.4/README.md" // should be some file on your system
    val conf = new SparkConf().setAppName("Simple Application")
    val sc = new SparkContext(conf)
    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
  }
}
After the DAGScheduler has divided the job into stages, tasks are submitted to the TaskScheduler in units of TaskSets:
1. One TaskScheduler serves only one SparkContext.
2. After receiving a TaskSet, it submits the tasks to Executors on the Worker nodes for execution. Failed tasks are monitored and retried by the TaskScheduler.
Executors run tasks in multiple threads; each thread is responsible for one task. The retry and concurrency behavior can be tuned through configuration, as sketched below.
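A small, hypothetical configuration sketch related to this behavior (the values are illustrative): spark.task.maxFailures controls how many times a failed task is retried, and spark.executor.cores bounds how many tasks a single Executor runs concurrently:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("SchedulingConfigSketch")  // illustrative application name
  .set("spark.task.maxFailures", "4")    // retries before a task is finally considered failed
  .set("spark.executor.cores", "2")      // cores per Executor, i.e. concurrent task slots
val sc = new SparkContext(conf)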
Next, we will trace the source code of an example provided by Spark.
Step 1: Software required by the Spark cluster.
Build a Spark cluster on the basis of the Hadoop cluster built from scratch in Articles 1 and 2. We will use Spark 1.0.0, released on May 30, 2014 and the latest version of Spark at the time, to build the Spark cluster on top of the existing Hadoop cluster.
Zhou Zhihu L. It's finally a holiday, so I can spare some time to update the blog.
1. Get the data
This article gives a detailed introduction to SparkSQL, using the git log of the Spark project on GitHub as the data. The data acquisition command is as follows:

[[emailprotected] spark]# git log --pretty=format:'{"commit":"%H","author":"%an","author_email":"%ae","date":"%ad","message":"%f"}' > sparktest.json

The command's output is redirected to sparktest.json.
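Once sparktest.json exists, a sketch of loading and querying it with SparkSQL could look as follows (this assumes Spark 1.4 or later, where sqlContext.read.json is available; the file path is illustrative and the author column comes from the %an field in the format string above):

// In spark-shell, sqlContext is already available.
val commits = sqlContext.read.json("/home/spark/sparktest.json") // illustrative path
commits.registerTempTable("commits")
sqlContext.sql(
  "SELECT author, COUNT(*) AS cnt FROM commits GROUP BY author ORDER BY cnt DESC LIMIT 10"
).show()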
The four operators used in the above example map to four operator types. A Spark program works in two spaces: the Spark RDD space and the Scala native data space. In the native data space, data is represented as scalars (Scala basic types, drawn as small orange squares in the figure), collection types (blue dashed boxes), and persistent storage (a red cylinder). This article describes the operation of the
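The round trip between the two spaces can be sketched as follows (the data and the operator chain are illustrative, not the exact example from the figure):

val native: Seq[Int] = Seq(1, 2, 3, 4)          // Scala native collection (scalar elements)
val rdd = sc.parallelize(native)                // enters the RDD space
val transformed = rdd.map(_ * 2).filter(_ > 2)  // transformations stay within the RDD space
val back: Array[Int] = transformed.collect()    // an action brings the result back to the native space
println(back.mkString(", "))                    // 4, 6, 8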
Packaging the file: File --> Project Structure --> click Artifacts --> click the green plus sign --> JAR --> select "From modules with dependencies..."
Start and view the cluster status
Step 1: Start the Hadoop cluster, which is explained in detail in the second lecture, so I will not go into details here:
After running the jps command on the master machine, the following process information is displayed:
When jps is run on slave1 and slave2, the following process information is displayed:
Step 2: Start the Spark cluster
With the Hadoop cluster started successfully, we can now start the Spark cluster:
Step 2: Use the Spark cache mechanism to observe the efficiency improvement
Based on the above, we now execute the following statement:
We find that the calculation returns the same result, 15.
At this point, go to the Web console:
The console clearly shows that we performed the "count" operation twice.
Now we perform the "cache" operation on the "sparks" variable:
Run the count operation again and view the Web console:
At this time, we find that the operation completes noticeably faster.
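The experiment can be reproduced in spark-shell roughly as follows; this is only a sketch, since the original statements are not reproduced here, and the HDFS path is an assumption:

val sparks = sc.textFile("hdfs://master:9000/user/spark/README.md") // illustrative path
sparks.filter(_.contains("Spark")).count()  // first count: data is read from HDFS
sparks.cache()                              // mark the RDD to be cached in memory
sparks.filter(_.contains("Spark")).count()  // this run populates the cache
sparks.filter(_.contains("Spark")).count()  // later runs read from the cache and finish noticeably faster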