Spark Cluster Python Package Management

Source: Internet
Author: User

Specific problems:

    1. Different data analysts / development teams need different Python versions to run PySpark.
    2. Within the same Python version, multiple Python libraries, or even different versions of the same library, need to be installed.

One workaround for problem 2 is to package the Python dependencies into a *.egg file and load that egg with --py-files when running pyspark or spark-submit. The drawback is that many Python libraries contain native code that must be compiled for the target platform, so complex dependencies such as pandas have to be built from source on a machine compatible with the cluster nodes.
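A minimal sketch of what the submission might look like once the egg exists (the egg and script names below are placeholders, not taken from the original post):

    # Ship the pre-built egg to the executors along with the job script.
    # pandas-0.x.x-py2.7-linux-x86_64.egg and my_job.py are placeholder names.
    spark-submit --py-files dist/pandas-0.x.x-py2.7-linux-x86_64.egg my_job.py

    # The same flag works for an interactive pyspark shell:
    pyspark --py-files dist/pandas-0.x.x-py2.7-linux-x86_64.egg

Building that egg for pandas goes roughly as follows: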

1. Download the pandas source from GitHub: https://codeload.github.com/pandas-dev/pandas/zip/master
2. Build the egg with python setup.py bdist_egg; this compiles the package and creates an egg file under dist/.
3. If GCC is required for the build, install it first:
    yum -y install gcc gcc-c++ kernel-devel
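
Put together, and assuming a CentOS/RHEL build host whose Python version matches the cluster (the download URL is from the steps above, everything else is illustrative), the build might look like this:

    # Step 3 first: make sure a compiler toolchain is available.
    yum -y install gcc gcc-c++ kernel-devel

    # Step 1: fetch and unpack the pandas source.
    wget https://codeload.github.com/pandas-dev/pandas/zip/master -O pandas-master.zip
    unzip pandas-master.zip
    cd pandas-master

    # Step 2: compile the native extensions and build the egg; the result
    # is written to dist/ and can then be passed to --py-files.
    python setup.py bdist_egg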

Reference:
http://blog.csdn.net/gongbi917/article/details/52369025
http://blog.csdn.net/willdeamon/article/details/53159548
