Two ways to convert an RDD into a DataFrame in Spark (implemented in Java and Scala, respectively)


One: Prepare the data source

Create a new Student.txt file in the project directory. Each line holds one comma-separated record of the form id,name,age, since the conversion code below parses three fields per line. (Note that the Java code reads the file as StuInfo.txt and the Scala code as Student2.txt / Student.txt, so name the file to match the example you run.) The sample ids and names are:

1, Zhangsan
2, Lisi
3, Wanger
4, Fangliu

Two: Implementation

Java version:

1. First create a Student bean class that implements Serializable and overrides toString(), with the following code:

import java.io.Serializable;

@SuppressWarnings("serial")
public class Student implements Serializable {
    String sid;
    String sname;
    int sage;

    public String getSid() {
        return sid;
    }

    public void setSid(String sid) {
        this.sid = sid;
    }

    public String getSname() {
        return sname;
    }

    public void setSname(String sname) {
        this.sname = sname;
    }

    public int getSage() {
        return sage;
    }

    public void setSage(int sage) {
        this.sage = sage;
    }

    @Override
    public String toString() {
        return "Student [sid=" + sid + ", sname=" + sname + ", sage=" + sage + "]";
    }
}
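If you want to preview the schema Spark will derive from this bean before running the conversion, the following minimal sketch can be used. The wrapper class SchemaPreview is only illustrative (it is not part of the original example) and assumes Spark 2.x is on the classpath; the column names come from the bean's getters.

import org.apache.spark.sql.Encoders;

public class SchemaPreview {
    public static void main(String[] args) {
        // Prints the schema inferred from the bean's getters:
        // sid and sname as string, sage as integer.
        Encoders.bean(Student.class).schema().printTreeString();
    }
}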

2. Then perform the conversion; the specific code is as follows:

import java.util.ArrayList;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

public class TxtToParquetDemo {

    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("TxtToParquet").setMaster("local");
        SparkSession spark = SparkSession.builder().config(conf).getOrCreate();

        reflectTransform(spark);   // Java reflection
        dynamicTransform(spark);   // dynamic conversion
    }

    /**
     * Convert through Java reflection
     * @param spark
     */
    private static void reflectTransform(SparkSession spark) {
        JavaRDD<String> source = spark.read().textFile("StuInfo.txt").javaRDD();
        JavaRDD<Student> rowRDD = source.map(line -> {
            String[] parts = line.split(",");
            Student stu = new Student();
            stu.setSid(parts[0]);
            stu.setSname(parts[1]);
            stu.setSage(Integer.valueOf(parts[2]));
            return stu;
        });
        Dataset<Row> df = spark.createDataFrame(rowRDD, Student.class);
        df.select("sid", "sname", "sage")
          .coalesce(1).write().mode(SaveMode.Append).parquet("parquet.res");
    }

    /**
     * Dynamic conversion
     * @param spark
     */
    private static void dynamicTransform(SparkSession spark) {
        JavaRDD<String> source = spark.read().textFile("StuInfo.txt").javaRDD();
        JavaRDD<Row> rowRDD = source.map(line -> {
            String[] parts = line.split(",");
            String sid = parts[0];
            String sname = parts[1];
            int sage = Integer.parseInt(parts[2]);
            return RowFactory.create(sid, sname, sage);
        });

        ArrayList<StructField> fields = new ArrayList<StructField>();
        StructField field = null;
        field = DataTypes.createStructField("sid", DataTypes.StringType, true);
        fields.add(field);
        field = DataTypes.createStructField("sname", DataTypes.StringType, true);
        fields.add(field);
        field = DataTypes.createStructField("sage", DataTypes.IntegerType, true);
        fields.add(field);

        StructType schema = DataTypes.createStructType(fields);
        Dataset<Row> df = spark.createDataFrame(rowRDD, schema);
        df.coalesce(1).write().mode(SaveMode.Append).parquet("parquet.res1");
    }
}
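To confirm that the Parquet files were actually written, a short check can be appended at the end of main(). This is a minimal sketch that simply reads back the output path used above and displays it:

        // Read the Parquet output back and inspect it (append at the end of main()).
        Dataset<Row> check = spark.read().parquet("parquet.res");
        check.printSchema();
        check.show();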

Scala version:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.StringType
import org.apache.spark.sql.types.StructField
import org.apache.spark.sql.types.StructType
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.IntegerType

object Rdd2Dataset {

  case class Student(id: Int, name: String, age: Int)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local").appName("Rdd2Dataset").getOrCreate()
    import spark.implicits._
    reflectCreate(spark)
    dynamicCreate(spark)
  }

  /**
   * Convert through reflection
   * @param spark
   */
  private def reflectCreate(spark: SparkSession): Unit = {
    import spark.implicits._
    val stuRDD = spark.sparkContext.textFile("Student2.txt")
    // toDF() is an implicit conversion
    val stuDF = stuRDD.map(_.split(","))
      .map(parts => Student(parts(0).trim.toInt, parts(1), parts(2).trim.toInt))
      .toDF()
    // stuDF.select("id", "name", "age").write.text("result")  // specify column names when writing to a file
    stuDF.printSchema()
    stuDF.createOrReplaceTempView("student")
    val nameDF = spark.sql("select name from student where age < 20")
    // nameDF.write.text("result")  // write the query result to a file
    nameDF.show()
  }

  /**
   * Dynamic conversion
   * @param spark
   */
  private def dynamicCreate(spark: SparkSession): Unit = {
    val stuRDD = spark.sparkContext.textFile("Student.txt")
    import spark.implicits._
    val schemaString = "id,name,age"
    val fields = schemaString.split(",")
      .map(fieldName => StructField(fieldName, StringType, nullable = true))
    val schema = StructType(fields)
    val rowRDD = stuRDD.map(_.split(",")).map(parts => Row(parts(0), parts(1), parts(2)))
    val stuDF = spark.createDataFrame(rowRDD, schema)
    stuDF.printSchema()
    stuDF.createOrReplaceTempView("student")
    val nameDF = spark.sql("select name from student where age < 20")
    // nameDF.write.text("result")  // write the query result to a file
    nameDF.show()
  }
}

Note: 1. All of the above code has been tested; the test environment is Spark 2.1.0 with JDK 1.8.

2. This code does not work on Spark versions earlier than 2.0, because it relies on SparkSession, which was introduced in Spark 2.0.
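For comparison, before Spark 2.0 the same reflection-based conversion went through SQLContext rather than SparkSession, which is why the code above needs 2.0 or later. A rough, untested sketch of the 1.x equivalent (reusing the Student bean from above; the class name LegacyTxtToParquet is only illustrative) would look like this:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class LegacyTxtToParquet {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("TxtToParquetLegacy").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);
        SQLContext sqlContext = new SQLContext(sc);

        JavaRDD<Student> stuRDD = sc.textFile("StuInfo.txt").map(line -> {
            String[] parts = line.split(",");
            Student stu = new Student();
            stu.setSid(parts[0]);
            stu.setSname(parts[1]);
            stu.setSage(Integer.valueOf(parts[2]));
            return stu;
        });

        // On Spark 1.x the result type is DataFrame rather than Dataset<Row>.
        DataFrame df = sqlContext.createDataFrame(stuRDD, Student.class);
        df.show();
    }
}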
