Description
Spark version: 2.2.0
I have two JSON files, emp and dept.
The emp content is as follows:
{"Name": "Zhangsan", "age": +, "depid": 1, "gender": "Male", "salary": 20000} {"name": "Lisi", "age": $, "Depid" : 2, "gender": "Female", "salary": 8500} {"name": "Wangwu", "age": All, "depid": 1, "gender": "Male", "salary": 500 0} {"name": "Zhaoliu", "age": +, "Depid": 3, "gender": "Male", "salary": 7000} {"name": "Marry", "age": 1 9, "Depid": 2, "gender": "Female", "salary": 6600} {"name": "Tom", "Age":, "Depid": 1, "gender": "Female", "Sal ary ":"the "name": "Kitty", "age": +, "Depid": 2, "gender": "Female", "salary": 6000} {"name": "Tony", "Age": $, "Depid": 4, "gender": "Female", "Salary": 4030}
The dept content is as follows:
{"id": 1, "name": "Tech Department"}
{"id": 2, "name": "Fina Department"}
{"id": 3, "name": "HR Department"}
Now I need to load the two files with Spark SQL, join them, and save the result locally.
Here are the steps:
1. Initialize Configuration
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SparkSession

val conf = new SparkConf().setMaster("local[2]").setAppName("load_data")
val sc = new SparkContext(conf)
val ssc = SparkSession.builder().appName("load_data_01").master("local[2]").getOrCreate()
sc.setLogLevel("ERROR") // test environment: set the log level to ERROR to cut down the log output
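As an aside, in Spark 2.x the separate SparkContext is not strictly necessary; the same setup can be driven entirely through the SparkSession. A minimal sketch, assuming the same app name and master:

import org.apache.spark.sql.SparkSession

val ssc = SparkSession.builder().appName("load_data_01").master("local[2]").getOrCreate()
ssc.sparkContext.setLogLevel("ERROR") // same effect: only ERROR-level messages are printed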
2. Load the two JSON files:
val df_emp = ssc.read.json("file:///e:\\javabd\\bd\\json_file\\employee.json")
val df_dept = ssc.read.format("json").load("file:///e:\\javabd\\bd\\json_file\\department.json")
3. Print the two loaded DataFrames to check that they were read successfully:
df_emp.show()
df_dept.show()
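Since JSON loading relies on schema inference, it can also be worth confirming the inferred column types; a quick check on the DataFrames above:

df_emp.printSchema() // shows each column with its inferred type and nullability
df_dept.printSchema()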
4. The data loads without problems, so next join the two DataFrames:
df_emp.join(df_dept, df_emp("depid") === df_dept("id"), "left").show()
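A left join is the right choice here: department.json only has ids 1 to 3, while Tony has depid 4, so an inner join would silently drop his row. A small sketch, using the same DataFrames, to list the employees with no matching department:

df_emp.join(df_dept, df_emp("depid") === df_dept("id"), "left")
  .filter(df_dept("id").isNull) // rows whose depid found no department
  .show()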
5. The join result also prints normally, so everything looks fine; just save it. But the save throws an error:
import org.apache.spark.sql.SaveMode

df_emp.join(df_dept, df_emp("depid") === df_dept("id"), "left").write.mode(SaveMode.Append).csv("file:///e:\\javabd\\bd\\json_file\\rs")
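The error is: org.apache.spark.sql.AnalysisException: Duplicate column(s): "name" found, cannot save to file. Both files have a "name" column, so the joined result contains two columns called "name", which CSV cannot write.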
A quick search turned up the cause in a forum thread: the result being saved contains columns with the same name, which is not allowed. The solution is then obvious: make the two tables' column names distinct by giving them aliases. So I modified the code:
1. The initialization configuration is unchanged.
2. Reading the files is unchanged.
3. Take the two DFs (each JSON file loads into a DataFrame) and give their columns aliases:
// Take out the column names of the two tables
val c_emp = df_emp.columns
val c_dept = df_dept.columns
// Set aliases for the two tables by prefixing every column name
val emp = df_emp.select(c_emp.map(n => df_emp(n).as("emp_" + n)): _*)
val dept = df_dept.select(c_dept.map(n => df_dept(n).as("dept_" + n)): _*)
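If only the clashing column matters, a lighter alternative is to rename just that one column instead of prefixing everything. A minimal sketch using withColumnRenamed (dept_renamed is an illustrative name, not from the original code):

// Only "name" appears in both DataFrames, so renaming one side is enough
val dept_renamed = df_dept.withColumnRenamed("name", "dept_name")
val joined = df_emp.join(dept_renamed, df_emp("depid") === dept_renamed("id"), "left")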
4. Then save again, and the error disappears:
emp.join(dept, emp("emp_depid") === dept("dept_id"), "left").write.mode(SaveMode.Append).csv("file:///e:\\javabd\\bd\\json_file\\rs")
A note about the save path: I saved to the local Windows disk. Because I have the Hadoop environment variables configured, a local write has to be prefixed with "file:///"; if you remove it, the path is treated as an HDFS path and you get a path-not-found error. If you want to write to HDFS, it is best to write the address in full: hdfs://namenode_ip:9000/file.
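For example, the same save pointed at HDFS would look roughly like this (namenode_ip, the port, and the target directory are placeholders for your cluster):

emp.join(dept, emp("emp_depid") === dept("dept_id"), "left")
  .write.mode(SaveMode.Append)
  .csv("hdfs://namenode_ip:9000/json_file/rs") // full HDFS address, as noted above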
The program runs without errors, so go to the specified directory to check whether the files were written:
The files were written successfully. Done.
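For reference, Spark writes CSV output as a directory of part files rather than a single file, so the rs folder should contain something like a _SUCCESS marker plus one part-XXXXX-....csv file per partition.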
This has been an error encountered when writing a Spark SQL program, and its solution: org.apache.spark.sql.AnalysisException: Duplicate column(s): "name" found, cannot save to file.