Today let's learn about the construction and type constraints of List in Scala. Let's take a look at the code:
package scala.learn

/**
 * @author Zhang
 */
abstract class Big_data
class Hadoop extends Big_data
class Spark extends Big_data

object List_constructor {
  def main(args: Array[String]): Unit = {
    val hadoop = new Hadoop :: Nil       // the list's element type is Hadoop
    val big_data = new Spark :: hadoop   // the element type is Big_data, promoted to the parent type
  }
}
As we said last time, Nil has type List[Nothing], and Nothing is a subtype of every type, so Nil can serve as the tail of a list of any element type; prepending new Hadoop therefore yields a List[Hadoop]. Then, new Spark has type Spark, a subtype of Big_data, while hadoop is a List[Hadoop], whose elements are also a subtype of Big_data. For big_data to hold both different subtypes, its element type is widened to their common upper bound, that is, the parent type: Big_data.
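To make the promotion visible, we can ascribe the types explicitly; a sketch reusing the classes above (the object name LubDemo is just for illustration), where the ascriptions match what the compiler infers on its own:

```scala
abstract class Big_data
class Hadoop extends Big_data
class Spark extends Big_data

object LubDemo {
  // Nil: List[Nothing], so prepending a Hadoop infers List[Hadoop]
  val hadoop: List[Hadoop] = new Hadoop :: Nil
  // Prepending a Spark to a List[Hadoop] widens the element type
  // to their common parent, Big_data
  val big_data: List[Big_data] = new Spark :: hadoop
}
```

If you replaced either ascription with a narrower type, say List[Spark] for big_data, the code would no longer compile, which confirms the widening.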
This mechanism makes List itself very easy to extend.
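The extension works because the prepend method on the standard library's covariant List[+A] is declared with a lower bound, roughly def ::[B >: A](x: B): List[B]. A minimal hand-rolled sketch of the same idea (MyList, MyNil, and Cons are illustrative names, not standard library types):

```scala
// A covariant cons list whose prepend uses a lower bound,
// mirroring the shape of :: on scala.List
sealed trait MyList[+A] {
  // B >: A lets an element of a sibling subtype be prepended;
  // the result's element type widens to the common parent B
  def ::[B >: A](head: B): MyList[B] = Cons(head, this)
}
case object MyNil extends MyList[Nothing]
case class Cons[+A](head: A, tail: MyList[A]) extends MyList[A]

abstract class Big_data
class Hadoop extends Big_data
class Spark extends Big_data

object LowerBoundDemo {
  // :: is right-associative, so this calls MyNil.::(new Hadoop)
  val hadoop: MyList[Hadoop] = new Hadoop :: MyNil
  // B is inferred as Big_data, widening the result's element type
  val big_data: MyList[Big_data] = new Spark :: hadoop
}
```

Without the lower bound B >: A, the parameter of :: would appear in contravariant position inside a covariant trait and the definition would not even compile; the bound is what lets a prepend both type-check and widen the list.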
More Scala resources to share:
Baidu Cloud Disk: http://pan.baidu.com/s/1gd7133t
Micro Cloud Disk: http://share.weiyun.com/047efd6cc76d6c0cb21605cfaa88c416
360 Cloud Disk: http://yunpan.cn/cQN9gvcKXe26M (extract code: 13CD)
This material comes from the DT Big Data Dream Factory public account: Dt_spark
Follow the account for more Scala learning material.
Lesson 81: The construction and type constraints of List in Scala: contravariance, covariance, and lower bounds