Integrating Spark SQL + MySQL + Parquet + HDFS on Spark 2.0


First, overview

For an overview of the changes in Spark 2.0, refer to the official website and other resources; they are not repeated here. Since the SQLContext of Spark 1.x has been merged into SparkSession in Spark 2.0, operations in the spark-shell client differ slightly, as described below.
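For context, here is a minimal sketch of that difference (the builder settings are illustrative, not from the original article): in Spark 1.x you built a SQLContext on top of a SparkContext, whereas in Spark 2.0 the spark-shell already provides a SparkSession bound to the variable spark, which you could construct yourself like this:

import org.apache.spark.sql.SparkSession

// Spark 2.0: SparkSession unifies the old SQLContext/HiveContext entry points.
// In spark-shell this object already exists as the variable `spark`.
val spark = SparkSession.builder()
  .appName("spark-sql-mysql-parquet")
  .enableHiveSupport()   // needed for persistent databases and tables
  .getOrCreate()

// Spark 1.x style, kept only for backward compatibility:
// val sqlContext = new org.apache.spark.sql.SQLContext(sc)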


Second, additional Spark configuration


1. The usual configuration is not repeated here. To read MySQL data, you need to add the JDBC driver jar to the environment variables of the current user. For example, mine is mysql-connector-java-5.1.18-bin.jar, stored at $SPARK_HOME/jars, so an additional environment variable is required:

export PATH=$PATH:$SPARK_HOME/jars


2. Start spark-shell

bin/spark-shell --master spark://h4:7077 --driver-class-path ./jars/mysql-connector-java-5.1.18-bin.jar --jars ./jars/mysql-connector-java-5.1.18-bin.jar


3. Executing SQL. After a normal startup, you can create a database through spark.sql and switch to the newly created one.

Spark.sql ("CREATE Database Spark")

Check whether the creation succeeded:

Spark.sql ("Show Databases"). Show

After successful creation, switch to the new database:

Spark.sql ("Use Spark")

Now read the remote MySQL data:

val sql = """CREATE TABLE student
  USING org.apache.spark.sql.jdbc
  OPTIONS (
    url "jdbc:mysql://worker2:3306/spark",
    dbtable "student",
    user "root",
    password "root"
  )"""

Execute it:

spark.sql(sql)
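Equivalently, instead of registering a table through SQL DDL, the DataFrame reader in Spark 2.0 can load the same MySQL table directly; a sketch assuming the same host, credentials, and driver jar as above:

// Load the MySQL table into a DataFrame without creating a catalog table.
val studentJdbcDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:mysql://worker2:3306/spark")
  .option("dbtable", "student")
  .option("user", "root")
  .option("password", "root")
  .load()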


Once the table is registered, cache its data:

Spark.sql ("Cache table student")

At this point you can query it, for example:

val studentDF = spark.sql("SELECT id, name FROM student")
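A couple of hypothetical follow-ups on that DataFrame (the predicate below is an assumption, not from the original):

studentDF.printSchema()              // inspect the schema inferred over JDBC
studentDF.where("id > 100").show(10) // assumed example filter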

Once the query meets your requirements, you can save the results to HDFS in Parquet format:

studentDF.write.parquet("hdfs://h4:9000/test/spark/parquet")
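You can verify the write by reading the files back from the same path:

val parquetDF = spark.read.parquet("hdfs://h4:9000/test/spark/parquet")
parquetDF.count()   // should match the row count of student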

The results can also be written in JSON format:

studentDF.write.json("hdfs://h4:9000/test/spark/json")
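Both writers fail by default if the target directory already exists; for repeated runs you can set a save mode explicitly (a sketch, assuming overwriting is acceptable):

import org.apache.spark.sql.SaveMode

studentDF.write.mode(SaveMode.Overwrite).parquet("hdfs://h4:9000/test/spark/parquet")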

Third, extension


Cluster state: hardware with 32 GB of memory and a 2 TB disk; Spark was given 4 cores and 20 GB of memory. The test speeds were as follows: importing a table of 27 million records into Spark took 1 second or less. Writing it to HDFS in JSON format with Spark SQL took 288 seconds and produced 1.0 GB in total, while storing it in Parquet format took 207 seconds and produced only 86.6 MB, so the advantage of Parquet is quite obvious.
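If you want to reproduce a comparison like this yourself, a minimal timing sketch follows; the figures above are the author's measurements, and the timed helper below is illustrative, not from the original:

// Crude wall-clock timer for a single action.
def timed[T](label: String)(block: => T): T = {
  val start = System.nanoTime()
  val result = block
  println(s"$label took ${(System.nanoTime() - start) / 1e9} s")
  result
}

timed("parquet write") {
  studentDF.write.mode("overwrite").parquet("hdfs://h4:9000/test/spark/parquet")
}
timed("json write") {
  studentDF.write.mode("overwrite").json("hdfs://h4:9000/test/spark/json")
}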

