Compiling Spark 1.6.1 source code

Normally, the release packages downloaded from Spark's official website are ready to use (Hive support is included by default). However, if you want to build against a specific CDH version of Hadoop, or bundle the Ganglia metrics sink, you need to recompile the source code with the appropriate build parameters. It is recommended to compile in a Linux environment.

1. Source code Download

Official website: https://spark.apache.org/downloads.html

Note: the path to the source directory should not contain Chinese characters.

2. Install and configure Maven

According to the official documentation, building Spark 1.6.1 with Maven requires Maven 3.3.3+ and Java 7+. This build uses Maven 3.3.9 and JDK 1.7.

Maven Download: http://maven.apache.org/download.cgi

JDK Download: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

Download and install Maven and the JDK, then configure the environment variables; the detailed steps are not covered here, but a minimal example is shown below.
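As a rough sketch, assuming Maven is unpacked to /usr/local/apache-maven-3.3.9 and the JDK to /usr/local/jdk1.7.0_80 (adjust both paths to your actual installation), the environment could be configured like this:

export MAVEN_HOME=/usr/local/apache-maven-3.3.9
export JAVA_HOME=/usr/local/jdk1.7.0_80
export PATH=$JAVA_HOME/bin:$MAVEN_HOME/bin:$PATH
mvn -version    # should report Maven 3.3.9
java -version   # should report Java 1.7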

3. Compile and package Spark

Before compiling, you need to increase the memory available to Maven, otherwise the build will run out of memory. On Linux, execute the following command:

export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"

On Windows, execute the following command instead:

set MAVEN_OPTS=-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m
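Note that the export on Linux only applies to the current shell session. If you want the setting to persist, one optional approach (not part of the original steps) is to append it to your shell profile:

echo 'export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"' >> ~/.bashrc
source ~/.bashrc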

Execute the following command to start compiling against the corresponding CDH version of Hadoop with Ganglia and Hive support:

mvn -Phadoop-2.6 -Dhadoop.version=2.6.0-cdh5.4.2 -Phive -Phive-thriftserver -Pyarn -Pspark-ganglia-lgpl -DskipTests -Dmaven.test.skip=true -e clean package

During compilation, dependency downloads may fail for network reasons; if that happens, simply re-run the build and wait patiently. This build went smoothly overall, apart from an error caused by a Maven version that was too low, which was resolved after upgrading Maven.
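As an aside (an option not used here), the Spark source tree also ships a build/mvn wrapper script that downloads a suitable Maven version automatically, which sidesteps the "Maven too old" problem. Run from the source root, an equivalent invocation would look like:

build/mvn -Phadoop-2.6 -Dhadoop.version=2.6.0-cdh5.4.2 -Phive -Phive-thriftserver -Pyarn -Pspark-ganglia-lgpl -DskipTests clean package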

After a successful build, spark-assembly-1.6.1-hadoop2.6.0-cdh5.4.2.jar can be found under $SPARK_HOME/assembly/target/.
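As a quick sanity check (not part of the original steps), you can confirm that the Ganglia sink classes were actually bundled into the assembly jar; note that on a default Scala 2.10 build the jar may sit in a scala-2.10 subdirectory under assembly/target/:

unzip -l $SPARK_HOME/assembly/target/spark-assembly-1.6.1-hadoop2.6.0-cdh5.4.2.jar | grep GangliaSink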

You can also build a deployable binary distribution with the following command:

./make-distribution.sh --name 2.6.0-cdh5.4.2 --tgz -Phadoop-2.6 -Dhadoop.version=2.6.0-cdh5.4.2 -Phive -Phive-thriftserver -Pyarn -Pspark-ganglia-lgpl

make-distribution.sh runs the same Maven build internally; when it finishes, spark-1.6.1-bin-2.6.0-cdh5.4.2.tgz can be found under $SPARK_HOME/.
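As an optional smoke test (assuming you copy the tarball to a directory of your choice), you can extract the distribution and start a local spark-shell:

tar -zxf spark-1.6.1-bin-2.6.0-cdh5.4.2.tgz
cd spark-1.6.1-bin-2.6.0-cdh5.4.2
./bin/spark-shell --master local[2]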

4. Reference

https://spark.apache.org/docs/latest/building-spark.html
