Spark Runtime Environment
Spark is written in Scala and runs on the JVM, so the runtime environment requires Java 6 or above.
If you want to use the Python API, you also need a Python interpreter, version 2.6 or above.
Note that Spark 1.2.0 does not yet work with Python 3.
Spark Download
Go to http://spark.apache.org/downloads.html, select the "Pre-built for Hadoop 2.4 and later" package, and click Direct Download. This downloads a compressed package named spark-1.2.0-bin-hadoop2.4.tgz.
Spark itself does not require Hadoop, but if you already have a Hadoop cluster or HDFS, download the version built for your Hadoop release.
Decompress it: tar -zxvf spark-1.2.0-bin-hadoop2.4.tgz
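After extracting, you can change into the new directory and look around. The layout below is typical of a pre-built Spark 1.2.0 package (the exact file list may differ slightly):
cd spark-1.2.0-bin-hadoop2.4
ls
# bin/  conf/  lib/  examples/  README.md  ...   (bin/ holds the shells, conf/ holds configuration templates)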
Spark's shells
Spark's shells let you work with data that is distributed across the cluster, whether it lives on disk or in memory.
Spark can load data into the memory of the worker nodes, so many distributed computations, even over a terabyte of distributed data, finish in a matter of seconds.
Because of this, iterative computation, real-time querying, and interactive analysis can usually be done right in the shell. Spark provides both a Python shell and a Scala shell.
Opening Spark's shells:
From the Spark directory, bin/pyspark opens the Python shell and bin/spark-shell opens the Scala shell.
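For example, from inside the extracted Spark directory you would run one of the following (standard invocations for the pre-built 1.2.0 package):
./bin/spark-shell    # Scala shell
./bin/pyspark        # Python shell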
Example:
scala> val lines = sc.textFile("../../testfile/helloSpark")  // create an RDD called lines
lines: org.apache.spark.rdd.RDD[String] = ../../testfile/helloSpark MappedRDD[1] at textFile at <console>:12
scala> lines.count()  // count the number of rows in this RDD
res0: Long = 2
scala> lines.first()  // the first line in the file
res1: String = Hello Spark
To change the log level, edit conf/log4j.properties and set log4j.rootCategory=WARN, console.
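The pre-built package ships only a template for this file, so you typically copy it first; a minimal sketch of the two steps:
cp conf/log4j.properties.template conf/log4j.properties
# then edit conf/log4j.properties and change the root category line to:
log4j.rootCategory=WARN, console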
Spark's Core Concepts
Driver Program:
Contains the program's main() method and defines and runs the RDDs. (In the example above, the driver program is the Spark shell itself.)
It manages many nodes, which we call executors.
The count() operation above is split up: each executor counts its part of the file, and the partial results are merged at the end.
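To make the driver-program idea concrete, here is a minimal standalone Scala application, the non-shell equivalent of the session above. The object name, master setting, and file path are placeholders for illustration; in the shell, Spark sets all of this up for you:

import org.apache.spark.{SparkConf, SparkContext}

object HelloSpark {
  def main(args: Array[String]): Unit = {
    // the driver program: configures the application and creates the SparkContext
    val conf = new SparkConf().setAppName("HelloSpark").setMaster("local")
    val sc = new SparkContext(conf)

    val lines = sc.textFile("../../testfile/helloSpark")  // define an RDD from a text file
    println(lines.count())                                // action: each executor counts its partitions, results are merged

    sc.stop()
  }
}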
SparkContext:
The driver program accesses Spark through a SparkContext object, which represents a connection to the cluster.
In the shell, a SparkContext is created for you automatically as the variable sc.
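You can confirm this by typing sc in the shell; the result number and object address will vary per session, roughly:
scala> sc
res2: org.apache.spark.SparkContext = org.apache.spark.SparkContext@...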
RDDs:
In Spark, we compute with distributed collections (resilient distributed datasets, or RDDs), which are spread across the cluster and processed in parallel.
RDDs are Spark's fundamental abstraction for distributing data and computation.
Creating RDDs with SparkContext
The example above uses sc.textFile() to create an RDD called lines from a local text file; each element of the RDD represents one line of the file. We can then run various parallel operations on the RDD, such as counting the elements in the dataset or printing the first line.
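As a further illustration, a couple of other common operations on the same lines RDD might look like this in the shell (the filter condition is arbitrary, and the results depend on the file contents):
scala> lines.filter(line => line.contains("Spark")).count()  // count only the lines that mention "Spark"
scala> lines.take(2).foreach(println)                        // fetch the first two lines and print them on the driver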
Spark PPT: http://pan.baidu.com/s/1i3ikdod
Spark learning website: www.bigdatastudy.cn