Hardware environment: two virtual machines, each with a four-core CPU, 4 GB of memory, and a 500 GB hard disk.
Software environment: 64-bit Ubuntu 12.04 LTS; host names spark1 and spark2, with IP addresses 1**.1*.**.***/*** respectively. The JDK version is 1.7. Hadoop 2.2 has already been deployed successfully on the cluster; the detailed deployment process can be found in a separate document on installing and deploying YARN.
2. Install Scala 2.9.3
1. Run the wget http://www.scala-lang.org/downloads/distrib/files/scala-2.9.3.tgz command under the /home/test/spark directory to download the Scala binary package, as sketched below.
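A minimal sketch of this step, assuming the archive is unpacked in place; the tar, SCALA_HOME, and PATH lines are assumptions beyond the original text (they would typically be added to ~/.bashrc on every node):

    cd /home/test/spark
    wget http://www.scala-lang.org/downloads/distrib/files/scala-2.9.3.tgz
    tar -zxf scala-2.9.3.tgz
    # Assumed environment setup so the scala command is on the PATH:
    export SCALA_HOME=/home/test/spark/scala-2.9.3
    export PATH=$PATH:$SCALA_HOME/bin
    scala -version    # should report version 2.9.3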
6. Modify the conf/slaves file under the Spark directory and add the following contents to the file (a shell sketch of this edit follows the list):
spark1
spark2
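Done from a shell, that edit might look like the following sketch; the Spark directory name is taken from step 7 below, and the heredoc is just one illustrative way to append the lines:

    cd /home/test/spark/spark-0.8.1-incubating-bin-hadoop2
    # Append both worker hostnames to conf/slaves:
    cat >> conf/slaves <<'EOF'
    spark1
    spark2
    EOF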
7. Use the scp command to copy the directory above to the same path on the other Spark node: scp -r spark-0.8.1-incubating-bin-hadoop2 test@spark2:/home/test/spark
8. Start the Spark cluster from spark1 and check whether the processes started successfully. In the output below, the Master and Worker have started successfully.
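A minimal sketch of the launch and check, assuming the 0.8.x layout in which the cluster scripts live under bin/ (later releases moved them to sbin/):

    cd /home/test/spark/spark-0.8.1-incubating-bin-hadoop2
    ./bin/start-all.sh    # starts the Master here and a Worker on each host listed in conf/slaves
    jps                   # the JDK process lister; expect Master (and a Worker) on spark1

Running jps on spark2 should likewise show its Worker process.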
You can see that the two slave nodes in the cluster have started successfully.
9. Run one of Spark's bundled examples: ./run-example org.apache.spark.examples.SparkPi spark://master:7077. The results are as follows:
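Spelled out from the top of the Spark directory (the spark://master:7077 URL is as given in the original; presumably the hostname master resolves to spark1, where the Master runs):

    cd /home/test/spark/spark-0.8.1-incubating-bin-hadoop2
    # SparkPi estimates pi by Monte Carlo sampling, distributed across the workers:
    ./run-example org.apache.spark.examples.SparkPi spark://master:7077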
You can see in the Web interface that the job you just ran looks like this:
Original link: http://blog.csdn.net/zhxue123/article/details/19199859