With an RDD, we often use `reduceByKey` to keep the record with the latest timestamp for each key:

```scala
case class Record(ts: Long, id: Int, value: Int)

def findLatest(records: RDD[Record])(implicit spark: SparkSession) = {
  records.keyBy(_.id).reduceByKey { (x, y) =>
    if (x.ts > y.ts) x else y
  }.values
}
```

With a DataFrame or Dataset, the same result can be obtained as follows:

```scala
import org.apache.spark.sql.functions._

val newDF = df
  .groupBy('id)
  .agg(max(struct('ts, 'value)) as 'tmp)
  .select($"id", $"tmp.*")
```

Why does this work? For `struct` (and tuple) types, `max` compares fields in declaration order: the first field dominates the ordering, and later fields only break ties. Since `ts` is the first field of the struct, taking the `max` keeps the row with the latest timestamp.

A more detailed example:

```scala
import org.apache.spark.sql.functions._

val data = Seq(
  ("michael", 1, "event 1"),
  ("michael", 2, "event 2"),
  ("reynold", 1, "event 3"),
  ("reynold", 3, "event 4")
).toDF("user", "time", "event")

val newestEventPerUser = data
  .groupBy('user)
  .agg(max(struct('time, 'event)) as 'event)
  .select($"user", $"event.*") // Unnest the struct into top-level columns.
```

```
scala> newestEventPerUser.show()
+-------+----+-------+
|   user|time|  event|
+-------+----+-------+
|reynold|   3|event 4|
|michael|   2|event 2|
+-------+----+-------+
```

For more complex selection logic that a single struct ordering cannot express, the typed API with `groupByKey` and `reduceGroups` can be used:

```scala
case class AggregateResultModel(id: String, mtype: String, healthScore: Int,
                                mortality: Float, reimbursement: Float)

// Assume rawScores has been loaded beforehand from JSON/CSV files.
val groupedResultSet = rawScores.as[AggregateResultModel]
  .groupByKey(item => (item.id, item.mtype))
  .reduceGroups((x, y) => getMinHealthScore(x, y))
  .map(_._2)

// The binary function used in reduceGroups: keep the row with the lower
// health score, breaking ties on mortality and then reimbursement.
def getMinHealthScore(x: AggregateResultModel, y: AggregateResultModel): AggregateResultModel = {
  if (x.healthScore > y.healthScore) y
  else if (x.healthScore < y.healthScore) x
  else if (x.mortality < y.mortality) y
  else if (x.mortality > y.mortality) x
  else if (x.reimbursement < y.reimbursement) x
  else y
}
```
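For completeness, the RDD `findLatest` at the top also has a direct typed-Dataset counterpart built on the same `groupByKey`/`reduceGroups` pattern. A minimal sketch (the name `findLatestDS` is just for illustration):

```scala
import org.apache.spark.sql.{Dataset, SparkSession}

// Typed equivalent of the RDD version: group by id, then keep the
// record with the larger timestamp within each group.
def findLatestDS(records: Dataset[Record])(implicit spark: SparkSession): Dataset[Record] = {
  import spark.implicits._
  records
    .groupByKey(_.id)
    .reduceGroups((x, y) => if (x.ts > y.ts) x else y)
    .map(_._2) // reduceGroups yields (key, record); drop the key
}
```

Another common way to express "latest record per key" is a window function; a sketch assuming the same `df` with `id` and `ts` columns as above:

```scala
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

// Rank rows within each id by descending timestamp and keep only rank 1.
val w = Window.partitionBy(col("id")).orderBy(col("ts").desc)
val latest = df
  .withColumn("rn", row_number().over(w))
  .where(col("rn") === 1)
  .drop("rn")
```

One difference worth noting: `row_number` keeps exactly one row per key even when timestamps tie, whereas `max(struct(...))` breaks ties using the remaining struct fields.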
Ref: https://stackoverflow.com/questions/41236804/spark-dataframes-reducing-by-key