1. Connect to MySQL
First, copy mysql-connector-java-5.1.39.jar into Spark's jars directory so the MySQL JDBC driver is on the classpath. Then, in spark-shell:
scala> import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.SQLContext

scala> val sqlContext = new SQLContext(sc)
warning: there was one deprecation warning; re-run with -deprecation for details
sqlContext: org.apache.spark.sql.SQLContext = org.apache.spark.sql.SQLContext@...
scala> sqlContext.read.format("jdbc").options(Map("url" -> "jdbc:mysql://localhost:3306/metastore",
     | "driver" -> "com.mysql.jdbc.Driver", "dbtable" -> "DBS", "user" -> "root", "password" -> "root")).load().show
+-----+--------------------+--------------------+-------+----------+----------+
|DB_ID|                DESC|     DB_LOCATION_URI|   NAME|OWNER_NAME|OWNER_TYPE|
+-----+--------------------+--------------------+-------+----------+----------+
|    1|Default Hive data...|hdfs://localhost:...|default|    public|      ROLE|
|    2|                null|hdfs://localhost:...|    aaa|      root|      USER|
|    6|                null|hdfs://localhost:...| userdb|      root|      USER|
+-----+--------------------+--------------------+-------+----------+----------+
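The long inline `Map(...)` passed to `options(...)` is easy to mistype (the original transcript, for instance, dropped a closing parenthesis). A minimal sketch of the same connection settings kept in a named value, which can then be handed to `options(...)` unchanged; the URL, table, and credentials below simply mirror the example values above and should be adjusted for your environment:

```scala
// JDBC options for reading a MySQL table through Spark SQL's "jdbc" data source.
// These values mirror the transcript above (local MySQL hosting the Hive
// metastore, root/root credentials); replace them with your own settings.
val jdbcOptions: Map[String, String] = Map(
  "url"      -> "jdbc:mysql://localhost:3306/metastore", // MySQL database to connect to
  "driver"   -> "com.mysql.jdbc.Driver",                 // driver class from mysql-connector-java
  "dbtable"  -> "DBS",                                   // metastore table listing Hive databases
  "user"     -> "root",
  "password" -> "root"
)

// With a SQLContext (or SparkSession) in scope, the read becomes:
//   sqlContext.read.format("jdbc").options(jdbcOptions).load().show()
```

Keeping the options in one value also makes it easy to reuse the same connection settings when reading several metastore tables.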
Structured data in Spark SQL