From Spark 1.2 to Spark 1.3, Spark SQL changed considerably: SchemaRDD was renamed to DataFrame, and more useful and convenient APIs were added.
When a DataFrame writes data to Hive, it goes into Hive's default database, and insertInto takes no database parameter. This article uses the following methods to write data to a Hive table, or to a partition of a Hive table, for reference only.
1. Writing DataFrame data to a Hive table
From the DataFrame class you can see that the write APIs associated with Hive tables are as follows:
registerTempTable(tableName: String): Unit
insertInto(tableName: String): Unit
insertInto(tableName: String, overwrite: Boolean): Unit
saveAsTable(tableName: String, source: String, mode: SaveMode, options: Map[String, String]): Unit
There are many overloaded variants that are not enumerated here.
The registerTempTable function creates a Spark temporary table.
The insertInto function writes data into an existing table. Note that this function cannot specify the database or partition information, so it cannot write to a partition directly.
Writing data into the Hive data warehouse requires specifying the database. The Hive table can be created on the Hive side, or with hiveContext.sql("CREATE TABLE ...").
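For example, a minimal sketch of creating the target table from Spark, assuming the hiveContext from the examples below (the database name, table name, and column types are illustrative):

// Illustrative DDL: select the database, then create the table if it does not exist
hiveContext.sql("use DataBaseName")
hiveContext.sql("CREATE TABLE IF NOT EXISTS tablename (name string, col1 int, col2 string)")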
The following code writes data to a table in the specified database:
case class Person(name: String, col1: Int, col2: String)

val sc = new org.apache.spark.SparkContext()
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
import hiveContext.implicits._

hiveContext.sql("use DataBaseName")
val data = sc.textFile("path").map(x => x.split("\\s+")).map(x => Person(x(0), x(1).toInt, x(2)))
data.toDF().insertInto("tablename")
First define a case class and convert the records in the RDD to that case class type, then call toDF to get a DataFrame. Select the database with the hiveContext.sql("use DataBaseName") statement, then invoke insertInto, and the DataFrame data is written into the Hive table.
2. Writing DataFrame data to a partition of a specified Hive table
Hive tables can be created on the Hive side, or with hiveContext.sql("CREATE TABLE ..."). The storage format is limited when using saveAsTable: the default format is Parquet, and it can be specified as JSON; if another format is required, create the Hive table with a CREATE TABLE statement instead.
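As an illustration of the format options, here is a minimal sketch of saveAsTable with the JSON source, reusing the data RDD from the previous example (the table name is illustrative):

import org.apache.spark.sql.SaveMode
// Illustrative: persist the DataFrame as a JSON-backed table; omitting the
// source argument would use the default, parquet
data.toDF().saveAsTable("tablename_json", "json", SaveMode.Overwrite, Map.empty[String, String])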
The idea for writing data into a partitioned table is to register the DataFrame as a temporary table first, and then write the data into the Hive partitioned table with a hiveContext.sql statement. The specific steps are as follows:
case class Person(name: String, col1: Int, col2: String)

val sc = new org.apache.spark.SparkContext()
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
import hiveContext.implicits._

hiveContext.sql("use DataBaseName")
val data = sc.textFile("path").map(x => x.split("\\s+")).map(x => Person(x(0), x(1).toInt, x(2)))
// Register a temporary table, then insert into the target partition via SQL
// (the table names and partition value below are illustrative)
data.toDF().registerTempTable("table1")
hiveContext.sql("insert into table2 partition(date='2015-04-02') select name, col1, col2 from table1")
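For completeness, the partitioned target table assumed above (table2, partitioned by date) could be created beforehand with a statement like the following; the schema shown is illustrative:

// Illustrative DDL for the partitioned target table
hiveContext.sql("CREATE TABLE IF NOT EXISTS table2 (name string, col1 int, col2 string) PARTITIONED BY (date string)")

Note that the columns selected from the temporary table must match the target table's column order, and the partition value is supplied in the partition(...) clause rather than in the SELECT list.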