Today I learned about Scala's type variable bounds. Let's take a look at the following code:
// A plain generic pair (no bound on T), shown first for contrast:
// class Pair[T](val first: T, val second: T)

class Pair[T <: Comparable[T]](val first: T, val second: T) {
  def bigger = if (first.compareTo(second) > 0) first else second
}

class Pair_lower_bound[T](val first: T, val second: T) {
  def replaceFirst[R >: T](newFirst: R) = new Pair_lower_bound[R](newFirst, second)
}

object Typy_variable_bounds {
  def main(args: Array[String]) {
    val pair = new Pair("Spark", "Hadoop")
    println(pair.bigger)
  }
}
First look at the definition of the Pair class. Its type parameter is declared as [T <: Comparable[T]], where <: means T must be a subtype of Comparable[T]. In this case, Comparable[T] is called the upper bound of T. Any type used for T must implement Comparable, and therefore provides a compareTo method, so the two elements of the pair can always be compared with each other.
By this definition we know that values of type T can be compared, so the pair created in the main function can call the bigger method, which prints "Spark" (since "Spark".compareTo("Hadoop") > 0).
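To make the upper bound concrete, here is a minimal, self-contained sketch (the Version class and ComparablePair name are illustrative, not from the original post): a user-defined type satisfies the bound T <: Comparable[T] by extending Comparable of itself.

```scala
// Illustrative type that satisfies the upper bound T <: Comparable[T]
class Version(val n: Int) extends Comparable[Version] {
  def compareTo(other: Version): Int = n - other.n
}

// Mirrors the article's Pair class, renamed to avoid confusion
class ComparablePair[T <: Comparable[T]](val first: T, val second: T) {
  def bigger = if (first.compareTo(second) > 0) first else second
}

object UpperBoundDemo {
  def main(args: Array[String]): Unit = {
    // String also works, since String extends Comparable[String]
    val p = new ComparablePair(new Version(2), new Version(1))
    println(p.bigger.n) // prints 2
  }
}
```

A type without a compareTo method (say, a plain class with no Comparable instance) would be rejected by the compiler here, which is exactly what the upper bound buys us.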
Back to the type definitions: look at the replaceFirst method in the Pair_lower_bound class, which cleverly uses a lower bound. The declaration [R >: T] means R must be a supertype of T, that is, T is the lower bound of R. When first is replaced with a value of type R, the new pair is a Pair_lower_bound[R]: the element type widens to the common supertype R.
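The widening behavior can be seen with a small class hierarchy. This is a sketch under assumed names (Fruit and Apple are illustrative, not from the original post):

```scala
// Illustrative hierarchy to demonstrate the lower bound
class Fruit
class Apple extends Fruit

class Pair_lower_bound[T](val first: T, val second: T) {
  // R >: T — T is the lower bound of R, so the result widens to Pair_lower_bound[R]
  def replaceFirst[R >: T](newFirst: R) = new Pair_lower_bound[R](newFirst, second)
}

object LowerBoundDemo {
  def main(args: Array[String]): Unit = {
    val apples = new Pair_lower_bound(new Apple, new Apple)
    // Passing a Fruit makes the compiler infer R = Fruit,
    // so the result is a Pair_lower_bound[Fruit]
    val fruits: Pair_lower_bound[Fruit] = apples.replaceFirst(new Fruit)
    println(fruits.first.getClass.getSimpleName) // prints Fruit
  }
}
```

Without the lower bound (plain replaceFirst(newFirst: T)), the call above would not compile, because a Fruit is not an Apple; the bound lets the result type widen safely instead.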
More Scala resources are shared here:
Baidu Cloud Disk: http://pan.baidu.com/s/1gd7133t
Micro Cloud Disk: http://share.weiyun.com/047efd6cc76d6c0cb21605cfaa88c416
360 Cloud Disk: http://yunpan.cn/cQN9gvcKXe26M (extract code: 13CD)
Information from the DT Big Data Dream Factory public account: Dt_spark.
Follow the account for more information about learning Scala.
Lesson 43: Scala type variable bounds in action, and their application in reading Spark source code.