Adding a column to a DataFrame is a common task.
In practice, though, the raw fields alone are often not enough: many of them need substantial transformation before they are useful, and some are not worth adding at all.
The columns to be added here are simple enough, however, that there is no need to write a UDF.
A column can be added to a DataFrame with the withColumn function. Its second parameter must be a Column expression built from the DataFrame's existing columns, so we start by creating a DataFrame that has only an id column.
scala> val df = sqlContext.range(0, 10)
df: org.apache.spark.sql.DataFrame = [id: bigint]
scala> df.show()
+---+
| id|
+---+
| 0|
| 1|
| 2|
| 3|
| 4|
| 5|
| 6|
| 7|
| 8|
| 9|
+---+
scala> df.withColumn("bb", col(id) * 0)
<console>:28: error: not found: value id
              df.withColumn("bb", col(id) * 0)
                                      ^
scala> df.withColumn("bb", col("id") * 0)
res2: org.apache.spark.sql.DataFrame = [id: bigint, bb: bigint]
scala> df.show()
+---+
| id|
+---+
| 0|
| 1|
| 2|
| 3|
| 4|
| 5|
| 6|
| 7|
| 8|
| 9|
+---+
scala> res2.show()
+---+---+
| id| bb|
+---+---+
| 0| 0|
| 1| 0|
| 2| 0|
| 3| 0|
| 4| 0|
| 5| 0|
| 6| 0|
| 7| 0|
| 8| 0|
| 9| 0|
+---+---+
scala> res2.withColumn("cc", col("id") * 0)
res5: org.apache.spark.sql.DataFrame = [id: bigint, bb: bigint, cc: bigint]
scala> res3.show()
<console>:30: error: value show is not a member of Unit
              res3.show()
                   ^
scala> res5.show()
+---+---+---+
| id| bb| cc|
+---+---+---+
| 0| 0| 0|
| 1| 0| 0|
| 2| 0| 0|
| 3| 0| 0|
| 4| 0| 0|
| 5| 0| 0|
| 6| 0| 0|
| 7| 0| 0|
| 8| 0| 0|
| 9| 0| 0|
+---+---+---+
scala>
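The transcript above derives the new column from the existing id column. A minimal sketch of the two other common cases, assuming Spark 2.x with a local SparkSession (the transcript used the older Spark 1.x sqlContext): `lit` adds a constant column without referencing any existing column, and a UDF covers transformations that built-in column expressions cannot express.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, lit, udf}

// Sketch assuming Spark 2.x; the transcript above used the Spark 1.x sqlContext.
val spark = SparkSession.builder()
  .master("local[*]")
  .appName("withColumn-demo")
  .getOrCreate()

val df = spark.range(0, 10).toDF("id")

// lit() wraps a constant into a Column, so no existing column is needed.
val withConst = df.withColumn("bb", lit(0))

// A UDF is only needed when built-in column expressions are not enough.
val square = udf((x: Long) => x * x)
val result = withConst.withColumn("cc", square(col("id")))

result.show()

// Collect before stopping the session so the rows remain usable afterwards.
val rows = result.collect()
spark.stop()
```

As in the transcript, each withColumn call returns a new DataFrame rather than modifying the original, so the calls can be chained.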