When writing code you will often find that RDD has no reduceByKey method. This happens in Spark 1.2 and earlier: reduceByKey is not defined on RDD itself, but on PairRDDFunctions, to which a key-value-pair RDD must be implicitly converted, so you had to add import org.apache.spark.SparkContext._. Since Spark 1.3 the implicit conversion lives in the RDD companion object, where the compiler finds it automatically, so no extra import is needed.
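As a concrete illustration, here is a minimal word-count-style sketch (class and app names are my own, and it assumes Spark is on the classpath). On Spark 1.2 and earlier the SparkContext._ import is mandatory for reduceByKey to compile; on 1.3 and later it is redundant but harmless:

```scala
import org.apache.spark.{SparkConf, SparkContext}
// Required before Spark 1.3 to bring the RDD -> PairRDDFunctions
// implicit conversion into scope; a no-op on later versions.
import org.apache.spark.SparkContext._

object ReduceByKeyDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("ReduceByKeyDemo").setMaster("local[*]"))
    val pairs = sc.parallelize(Seq(("a", 1), ("a", 2), ("b", 3)))
    // reduceByKey is not a method of RDD; the compiler rewrites this call
    // through the implicit conversion to PairRDDFunctions.
    val counts = pairs.reduceByKey(_ + _).collect().toMap
    println(counts) // Map(a -> 3, b -> 4)
    sc.stop()
  }
}
```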
The RDD companion object defines additional RDD operations, such as the implicit conversion that enables the reduceByKey method on key-value-pair RDDs:
object RDD {
  // Before Spark 1.3 this implicit conversion lived in SparkContext and
  // was only available after import SparkContext._. It was moved here so
  // the compiler picks it up automatically; the original copy in
  // SparkContext is kept for backward compatibility.
  implicit def rddToPairRDDFunctions[K, V](rdd: RDD[(K, V)])
      (implicit kt: ClassTag[K], vt: ClassTag[V], ord: Ordering[K] = null): PairRDDFunctions[K, V] = {
    new PairRDDFunctions(rdd)
  }
}
As for what an implicit conversion is: simply put, it is Scala coming to the rescue, letting Lao Wang next door do for you what you cannot do yourself.
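The mechanism can be shown without Spark at all. In this plain-Scala sketch (RichPair, Conversions, and this simplified reduceByKey are illustrative names of mine, not Spark's), a Seq of pairs gains a method it does not define, exactly the way RDD gains reduceByKey:

```scala
import scala.language.implicitConversions

// "Lao Wang next door": a wrapper class that has the method Seq lacks.
class RichPair(pairs: Seq[(String, Int)]) {
  // A simplified stand-in for reduceByKey: merge values sharing a key.
  def reduceByKey(f: (Int, Int) => Int): Map[String, Int] =
    pairs.groupBy(_._1).map { case (k, vs) => k -> vs.map(_._2).reduce(f) }
}

object Conversions {
  // Analogous to rddToPairRDDFunctions: once in scope, the compiler
  // inserts this wrapper whenever reduceByKey is called on a bare Seq.
  implicit def seqToRichPair(pairs: Seq[(String, Int)]): RichPair =
    new RichPair(pairs)
}
```

With import Conversions._ in scope, Seq(("a", 1), ("a", 2), ("b", 3)).reduceByKey(_ + _) compiles and yields Map(a -> 3, b -> 4), even though Seq itself has no such method.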