Spark Cluster Deployment


1. Introduction to Installation Environment

Hardware environment: two virtual machines, each with a four-core CPU, 4 GB of memory, and a 500 GB hard disk.

Software environment: 64-bit Ubuntu 12.04 LTS; host names spark1 and spark2, with IP addresses 1**.1*.**.***/*** (masked in the original). The JDK version is 1.7. Hadoop 2.2 has already been successfully deployed on the cluster; the detailed deployment process can be found in a separate document on installing and deploying YARN.

2. Install Scala 2.9.3

1) Run `wget http://www.scala-lang.org/downloads/distrib/files/scala-2.9.3.tgz` in the /home/test/spark directory to download the Scala binary package.

2) Extract the downloaded file and configure the environment variables: edit the /etc/profile file and add the following:

export SCALA_HOME=/home/test/spark/scala/scala-2.9.3

export PATH=$PATH:$SCALA_HOME/bin

3) Run `source /etc/profile` so the environment variable changes take effect immediately. Perform the same steps on spark2 to install Scala there as well.
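Steps 1–3 above can be sketched as a single shell session. The download URL and paths are taken from the text; the extraction layout (a scala/ subdirectory matching SCALA_HOME) is an assumption:

```shell
# Run on spark1, then repeat on spark2.
cd /home/test/spark

# Step 1: download the Scala 2.9.3 binary package (URL from the text above).
wget http://www.scala-lang.org/downloads/distrib/files/scala-2.9.3.tgz

# Step 2: extract under a scala/ subdirectory (assumed layout, chosen to
# match the SCALA_HOME value used below).
mkdir -p scala
tar -xzf scala-2.9.3.tgz -C scala

# Step 2 (cont.): append the environment variables to /etc/profile.
cat >> /etc/profile <<'EOF'
export SCALA_HOME=/home/test/spark/scala/scala-2.9.3
export PATH=$PATH:$SCALA_HOME/bin
EOF

# Step 3: apply the changes immediately and verify the install.
source /etc/profile
scala -version
```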

3. Download the pre-built Spark package from http://d3kbcqa49mib13.cloudfront.net/spark-0.8.1-incubating-bin-hadoop2.tgz and unzip it after downloading.

4. Configure environment variables in conf/spark-env.sh by adding the following:

export SCALA_HOME=/home/test/spark/scala/scala-2.9.3

5. Configure SPARK_EXAMPLES_JAR and the Spark environment variables in /etc/profile by adding the following:

export SPARK_EXAMPLES_JAR=/home/test/spark/spark-0.8.1-incubating-bin-hadoop2/examples/target/scala-2.9.3/spark-examples_2.9.3-assembly-0.8.1-incubating.jar

export SPARK_HOME=/home/test/spark/spark-0.8.1-incubating-bin-hadoop2

export PATH=$PATH:$SPARK_HOME/bin

6. Modify the conf/slaves file, adding the following lines:

spark1

spark2

7. Use the scp command to copy the directory above to the same path on the other Spark node: `scp -r spark-0.8.1-incubating-bin-hadoop2 test@spark2:/home/test/spark`
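A minimal sketch of the copy step, assuming passwordless SSH from spark1 to spark2 and an identical /home/test/spark layout on both nodes:

```shell
# Run from /home/test/spark on spark1: copy the configured Spark
# directory to the same path on spark2.
scp -r spark-0.8.1-incubating-bin-hadoop2 test@spark2:/home/test/spark

# Note: spark2 also needs the same /etc/profile additions from steps 2
# and 5; add them on that node and run `source /etc/profile` there too.
```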

8. Start the Spark cluster on spark1 and check whether the processes started successfully. As shown below, the Master and Worker have started successfully.

You can see that the two slave nodes in the cluster have started successfully.
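The start-and-verify step can be sketched as follows (in Spark 0.8.x the cluster scripts live under bin/; the exact `jps` output also depends on the Hadoop daemons running on these nodes):

```shell
# On spark1: start the master plus every worker listed in conf/slaves.
cd /home/test/spark/spark-0.8.1-incubating-bin-hadoop2
bin/start-all.sh

# Check the JVM processes on spark1; expect a Master and a Worker,
# since spark1 itself is listed in conf/slaves.
jps

# On spark2, jps should show a Worker process.
ssh test@spark2 jps
```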

9. Run one of Spark's bundled examples: `./run-example org.apache.spark.examples.SparkPi spark://master:7077`. The results are as follows:
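As a sketch, the example run looks like this (the master URL should match the one shown at the top of the Spark web UI; given the host names in this setup it would typically be spark://spark1:7077):

```shell
# Run the bundled SparkPi example against the standalone master.
cd /home/test/spark/spark-0.8.1-incubating-bin-hadoop2
./run-example org.apache.spark.examples.SparkPi spark://master:7077

# On success, the driver output ends with a line of the form:
#   Pi is roughly 3.14...
```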

In the web interface you can see the job that was just run:

Original link: http://blog.csdn.net/zhxue123/article/details/19199859
