This article shares what Spark is and how to analyze data with it, for readers who are interested in big data. What is Apache Spark? Apache Spark is a cluster computing platform designed for speed and general-purpose use. From a speed point of view, Spark extends the popular MapReduce model and can run computations in memory, which makes it much faster for iterative workloads.
Save and run the source command to make the configuration file take effect.
Step 3: Run IDEA and install and configure the IDEA Scala development plug-in:
The official document states:
Go to the IDEA bin directory:
Run "idea.sh" and the following page appears:
Select "Configure" to go to the IDEA configuration page:
Select "Plugins" to go to the plug-in installation page:
Click the "Install JetBrains plugin" option in the lower left corner to go to the following page:
Enter "Scala" in the search box to find the plug-in:
Modify the source code of our "FirstScalaApp" to the following:
Right-click "FirstScalaApp" and choose "Run Scala Console". The following message is displayed:
This is because we have not set the JDK path for Java. Click "OK" to go to the following view:
In this case, select the "Project" option on the left:
Then select "New" next to "No SDK", which brings up the following view:
Click the JDK option:
Select the JDK directory we installed earlier:
Click "OK"
Click OK:
Click the f
The full list of mapred-site.xml configuration options can be found at:
http://hadoop.apache.org/docs/r2.2.0/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml
Step 7: Modify the configuration file yarn-site.xml, as shown below:
Modify the content of the yarn-site.xml:
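The original configuration listing was an image and did not survive; what follows is a minimal sketch of a typical Hadoop 2.2.0 yarn-site.xml, not the author's exact file (the host name sparkmaster is taken from this tutorial's cluster; adjust it for your setup):
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>sparkmaster</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>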
The above content is the minimal configuration of yarn-site.xml. The full list of yarn-site.xml configuration options can be found at:
http://hadoop.apache.org/docs/r2.2.0/hadoop-yarn/hadoop-yarn-common/yarn-default.xml
7. Perform the same Hadoop 2.2.0 operations on sparkworker1 and sparkworker2 as on sparkmaster. We recommend using the scp command to copy the Hadoop installation and configuration from sparkmaster to sparkworker1 and sparkworker2, as sketched below;
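A sketch of the copy step (the paths assume the /usr/local/hadoop install directory used earlier and a root login on the workers; run it once per worker):
scp -r /usr/local/hadoop/hadoop-2.2.0 root@sparkworker1:/usr/local/hadoop/
scp -r /usr/local/hadoop/hadoop-2.2.0 root@sparkworker2:/usr/local/hadoop/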
8. Start and verify the Hadoop distributed cluster
Step 1: Format the HDFS file system:
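The original command screenshot did not survive; on Hadoop 2.2.0 the format command is typically run from the installation directory:
bin/hdfs namenode -format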
Step 2: Go to the sbin directory and start HDFS by executing the following command:
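The command itself was lost with the screenshot; the standard HDFS start script in sbin is:
./start-dfs.sh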
The startup process is as follows:
At this point, we
Copy the downloaded hadoop-2.2.0.tar.gz to the "/usr/local/hadoop/" directory and decompress it:
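A sketch of the decompression step, assuming the archive was copied as described:
cd /usr/local/hadoop
tar -zxvf hadoop-2.2.0.tar.gz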
Modify the system configuration file ~/.bashrc: configure "HADOOP_HOME" and add the bin folder under "HADOOP_HOME" to the PATH. After modification, run the source command to make the configuration take effect.
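A sketch of the relevant ~/.bashrc lines (the install path is assumed from the earlier steps):
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.2.0
export PATH=$PATH:$HADOOP_HOME/bin
Then apply the change:
source ~/.bashrc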
Next, create a folder in the hadoop directory using the following command:
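The exact folder name did not survive extraction; a common choice at this point in Hadoop 2.2.0 tutorials is a tmp directory for HDFS data, referenced later by core-site.xml, for example:
mkdir /usr/local/hadoop/hadoop-2.2.0/tmp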
Next, modify the Hadoop configuration files. First, go to the Hadoop 2.2.0 configuration directory:
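On Hadoop 2.2.0 the configuration files live under etc/hadoop:
cd /usr/local/hadoop/hadoop-2.2.0/etc/hadoop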
Start and view the cluster status
Step 1: Start the Hadoop cluster, which is explained in detail in the second lecture. I will not go into details here:
After the jps command is run on the master machine, the following process information is displayed (on a setup like this one, you should typically see NameNode, SecondaryNameNode, and ResourceManager):
When jps is run on slave1 and slave2, the following process information is displayed (typically DataNode and NodeManager):
Step 2: Start the Spark cluster
With the Hadoop cluster successfully started, we can now start the Spark cluster:
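The original command did not survive; a typical invocation for a standalone Spark cluster, assuming you are in the Spark installation directory, is:
./sbin/start-all.sh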
1.1 Spark Interactive Analysis
Start Hadoop's HDFS and YARN before running the Spark shell. The Spark shell provides a simple way to learn the API, and it is also a powerful tool for analyzing data interactively. It is available in two languages: Scala and Python. The following shows how to use Python to analyze data files. Go to the Spark installation directory.
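The launch command was cut off in the original; the Python shell ships with Spark and is started from the installation directory:
./bin/pyspark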
http://www.cnblogs.com/shishanyuan/archive/2015/08/19/4721326.html
1. Spark Runtime Structure
1.1 Term Definitions
Application: a Spark application, similar in concept to a Hadoop MapReduce application; it refers to a user-written Spark program that contains driver-side functional code and executor code that runs on multiple nodes in a cluster;
Driver: the process that runs the application's main() function and creates the SparkContext;
// Define the Click class (according to the clk.tsv data format)
case class Click(d: java.util.Date, uuid: String, landing_page: Int)
// Load the reg.tsv file on HDFS and convert each row of data to a Register object;
val reg = sc.textFile("hdfs://chenx:9000/week2/join/reg.tsv").map(_.split("\t")).map(r => (r(1), Register(format.parse(r(0)), r(1), r(2), r(3).toFloat, r(4).toFloat)))
// Load the clk.tsv file on HDFS and convert each row of data to a Click object;
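The corresponding loading line was cut off; here is a sketch by analogy with the reg.tsv line above (the field positions and types are assumed from the Click class, and format is the date parser defined earlier in the original):
val clk = sc.textFile("hdfs://chenx:9000/week2/join/clk.tsv").map(_.split("\t")).map(c => (c(1), Click(format.parse(c(0)), c(1), c(2).trim.toInt)))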
distributed system and maximize performance. At the end of the program, you must call the stop method to disconnect from the environment.
The textFile method reads a text file and creates an RDD in the Spark environment; this dataset is stored in the lines variable. The flatMap method is different from the map method: map returns one output element per input element (for key-value pairs the resulting RDD is somewhat similar to a hash table), while flatMap flattens all of the per-element results into a single collection, as sketched below.
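A minimal Scala sketch of these calls (the file path and application name are assumptions, not from the original):
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("WordCount")
val sc = new SparkContext(conf)
// textFile creates an RDD of lines from the input file
val lines = sc.textFile("hdfs://chenx:9000/week2/README.md")
// map yields one element per input line (here, an array of words per line)
val wordsPerLine = lines.map(_.split(" "))
// flatMap flattens all words from all lines into a single RDD[String]
val words = lines.flatMap(_.split(" "))
val counts = words.map(w => (w, 1)).reduceByKey(_ + _)
counts.collect().foreach(println)
sc.stop() // disconnect from the environment at the end of the program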
RDDs can be created from Hadoop InputFormats (for example, HDFS files) or by transforming other RDDs. Let's create a new RDD from the README.md text file in the Spark source code directory.
scala> val textFile = sc.textFile("file:///home/hadoop/hadoop/spark/README.md")
16/07/24 03:30:53 INFO storage.MemoryStore: ensureFreeSpace(217040) called with curMem=321016, maxMem=280248975
16/
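Once the RDD exists, a couple of simple actions can be tried on it (a sketch; output omitted):
scala> textFile.count() // number of lines in this RDD
scala> textFile.first() // first line in this RDD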
Source code:
/** Distribute a local Scala collection to form an RDD.
 *
 * @note Parallelize acts lazily. If `seq` is a mutable collection and is
 * altered after the call to parallelize and before the first action on the
 * RDD, the resultant RDD will reflect the modified collection. Pass a copy of
 * the argument to avoid this.
 */
def parallelize[T: ClassTag](seq: Seq[T], numSlices: Int = defaultParallelism): RDD[T] = {
  new ParallelCollectionRDD[T](this, seq, numSlices, Map[Int, Seq[String]]())
}
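A short usage sketch of parallelize (the values are illustrative):
val data = sc.parallelize(Seq(1, 2, 3, 4, 5), numSlices = 2)
data.reduce(_ + _) // returns 15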
To create a new RDD:
>>> textFile = sc.textFile("README.md")
An RDD supports two types of operations, actions and transformations:
Actions: return a value after running a computation on the dataset
Transformations: create a new dataset from an existing dataset
An RDD can go through a sequence of transformations, each returning a pointer to a new RDD, followed by an action that returns a value. Let's learn some of the simple actions of the RDD:
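For consistency with the Scala shell used earlier, a minimal sketch of one action and one transformation (the variable names are assumed):
val lineCount = textFile.count() // action: computes and returns a value
val sparkLines = textFile.filter(line => line.contains("Spark")) // transformation: returns a new RDD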
localhost:59627 (size: 28.9 KB, free: 265.4 MB)
15/05/05 06:30:35 INFO storage.BlockManagerMaster: Updated info of block broadcast_0_piece0
15/05/05 06:30:35 INFO spark.DefaultExecutionContext: Created broadcast 0 from textFile
Spark SQL supports importing the JSON format; save the file for later test use (see the reference linked here).
The command to end historyserver is as follows:
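The command itself did not survive; assuming this refers to Hadoop's MapReduce history server started earlier in this series, a typical invocation from the sbin directory is:
./mr-jobhistory-daemon.sh stop historyserver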
Step 4: Verify the Hadoop distributed cluster
First, create two directories on the HDFS file system. The creation process is as follows:
/data/wordcount in HDFS is used to store the data files for the WordCount example.
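The creation commands were lost; a sketch using the HDFS shell (the output path /output/wordcount is an assumption based on the naming pattern, not from the original):
hadoop fs -mkdir -p /data/wordcount
hadoop fs -mkdir -p /output/wordcount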
Next, package the application using Project Structure's Artifacts:
Use "From modules with dependencies":
Select the Main Class:
Click "OK":
Change the name to SparkDemoJar:
Because Scala and Spark are installed on each machine, you can delete the Scala- and Spark-related jar files:
Next, build:
Select "Build Artifacts":
The rest of the operation is to upload the jar package to the server and then execute the submit command, sketched below.
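The original command was cut off; a typical submission for a standalone cluster (the class name, master URL, and jar name are assumptions):
./bin/spark-submit --class com.example.SparkDemo --master spark://sparkmaster:7077 SparkDemoJar.jar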