Developing a Spark SQL Application in Python


Prerequisites:

    1. Deploy a Hadoop cluster

    2. Deploy a Spark cluster

    3. Install Python (I installed Anaconda3; the Python version is 3.6)


Configure the environment variables:

    vi ~/.bashrc
    # add the following lines
    export SPARK_HOME=/opt/spark/current
    export PYTHONPATH=$SPARK_HOME/python/:$SPARK_HOME/python/lib/py4j-0.10.4-src.zip

PS: Spark ships with its own pyspark module, but the pyspark bundled with the official Spark 2.1 download is incompatible with Python 3.6 (there is a bug). If you are using Python 3, I suggest downloading the latest pyspark from GitHub and replacing the pyspark under the $SPARK_HOME/python directory with it.
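
A quick way to confirm the variables took effect is to start Python 3 and import pyspark; this is a minimal check, assuming only the paths configured above:

    import os
    import pyspark

    # Should print /opt/spark/current once ~/.bashrc has been sourced
    print(os.environ.get("SPARK_HOME"))
    # Should print the Spark version, e.g. 2.1.x
    print(pyspark.__version__)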


Now let's get started:

1. Start the Hadoop cluster and the Spark cluster

(Screenshot: Hadoop cluster processes running)

(Screenshot: Spark cluster processes running)

2. Upload the data to the Hadoop file system. people.json is the official example data; salary.json is a file I created myself (sample contents are shown after the screenshot below).

    hadoop fs -mkdir -p /user/hadoop/examples/src/main/resources/
    hadoop fs -put people.json /user/hadoop/examples/src/main/resources/
    hadoop fs -put salary.json /user/hadoop/examples/src/main/resources/

(Screenshot: the data files uploaded to HDFS)
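
For reference, people.json is the sample file that ships with Spark under examples/src/main/resources:

    {"name":"Michael"}
    {"name":"Andy", "age":30}
    {"name":"Justin", "age":19}

salary.json uses the same one-JSON-object-per-line format with a salary field; its exact contents are not reproduced here, but a minimal file consistent with the query result later in this post could look like:

    {"name":"Justin", "salary":10000}
    {"name":"Andy", "salary":4000}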

3. Write the Python Spark SQL program

    # -*- coding: utf-8 -*-
    """
    Created on Wed Feb 22 15:07:44 2017
    Practice Spark SQL
    @author: wanghuan
    """
    from pyspark.sql import SparkSession

    spark = SparkSession.builder \
        .master("spark://centos7master:7077") \
        .appName("Python Spark SQL basic example") \
        .config("spark.some.config.option", "some-value") \
        .getOrCreate()
    # ssc = SparkContext("local[2]", "sparksqltest")

    peopleDF = spark.read.json("examples/src/main/resources/people.json")
    salaryDF = spark.read.json("examples/src/main/resources/salary.json")
    # peopleDF.printSchema()

    # Create temporary views from the DataFrames
    peopleDF.createOrReplaceTempView("people")
    salaryDF.createOrReplaceTempView("salary")

    # SQL statements can be run via the sql method provided by spark
    teenagerNamesDF = spark.sql(
        "select a.name, a.age, b.salary from people a, salary b "
        "where a.name = b.name and a.age < 30 and b.salary > 5000")
    teenagerNamesDF.show()
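
The same join can also be written with the DataFrame API instead of an SQL string; here is a sketch that assumes the peopleDF and salaryDF defined above:

    # Join people with salary on name, then filter and project, mirroring the SQL above
    joined = peopleDF.join(salaryDF, peopleDF.name == salaryDF.name) \
        .where((peopleDF.age < 30) & (salaryDF.salary > 5000)) \
        .select(peopleDF.name, peopleDF.age, salaryDF.salary)
    joined.show()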

4. Run the Spark SQL application
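
The program can be submitted to the cluster with spark-submit your_script.py (use whatever filename you saved the program as); because the master URL is already set in the code through SparkSession.builder.master, running it directly with python your_script.py works as well.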

(Screenshot: the application executing)

(Screenshot: the finished run and its output)

The run took 42 seconds (the execution time feels a bit long to me, which probably has to do with the performance of my virtual machines; I am running four VMs on a single Dell notebook). The result came out: 19-year-old Justin has a salary of 10000. So young!


PS: I originally intended to develop the Spark application in Java or Scala, but configuring the development environment is a really painful process. The most troublesome part is the Scala build environment: SBT or Maven has to download a lot of packages, and the foreign ones simply won't download (everyone knows why). So I could only turn to interpreted Python, which at least does not require downloading foreign build packages.

