Tonight I listened to Liaoliang's 15th lesson, "A Thorough Decryption of RDD Creation Internals". The class notes are as follows:
The first RDD in a Spark driver represents the source of the input data for the Spark application. Subsequent transformations then convert that RDD through the various operators.
Ways to create an RDD:
1. From a collection in the program
2. From the local file system
3. From HDFS
4. From a database (DB)
5. From NoSQL stores, such as HBase
6. From S3
7. From a data stream
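The first three creation methods above can be sketched as follows. This is a minimal sketch, assuming a local Spark installation; the file paths and cluster address are hypothetical placeholders:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object RDDCreation {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("RDDCreation").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // 1. From a collection in the program
    val fromCollection = sc.parallelize(1 to 100)

    // 2. From the local file system (hypothetical path)
    val fromLocalFile = sc.textFile("file:///tmp/input.txt")

    // 3. From HDFS (hypothetical cluster address)
    val fromHdfs = sc.textFile("hdfs://master:9000/data/input.txt")

    println(fromCollection.count())
    sc.stop()
  }
}
```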
If the degree of parallelism is not specified, Spark uses as many cores as are available, so resource management is required to prevent all resources from being consumed at once.
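One way to cap this explicitly is the `spark.default.parallelism` property, which sets the partition count used when none is given; the value 8 below is an arbitrary example:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: set a default degree of parallelism so parallelize and shuffle
// operations do not silently claim every available core.
val conf = new SparkConf()
  .setAppName("ParallelismDemo")
  .set("spark.default.parallelism", "8") // arbitrary example value

val sc = new SparkContext(conf)
val rdd = sc.parallelize(1 to 1000) // defaults to 8 partitions
```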
reduce is an action, so it does not produce a new RDD.
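For contrast: map (a transformation) returns a new RDD, while reduce (an action) returns a plain value to the driver. A small sketch, assuming an existing SparkContext `sc`:

```scala
val numbers = sc.parallelize(1 to 10)

val doubled = numbers.map(_ * 2)    // transformation: returns a new RDD
val total   = numbers.reduce(_ + _) // action: returns an Int (55), not an RDD
```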
Spark's map and filter are narrow transformations that need no shuffle, and even a simple reduceByKey combines map-side before shuffling, which makes Spark much faster than Hadoop MapReduce.
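This can be seen by chaining transformations: everything up to reduceByKey runs partition-local in one stage, and only the reduceByKey boundary shuffles the already-combined output. A sketch with a hypothetical input path:

```scala
val words = sc.textFile("file:///tmp/words.txt") // hypothetical path

// flatMap, filter, and map are narrow: each runs per-partition, no shuffle
val counts = words
  .flatMap(_.split(" "))
  .filter(_.nonEmpty)
  .map(word => (word, 1))
  .reduceByKey(_ + _) // map-side combine first, then one shuffle

// toDebugString shows the stage boundary introduced by reduceByKey
println(counts.toDebugString)
```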
val rdd = sc.parallelize(numbers, 10) // specifies a degree of parallelism of 10
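The resulting partition count can be verified directly on the returned RDD:

```scala
val numbers = 1 to 100
val rdd = sc.parallelize(numbers, 10)
println(rdd.partitions.length) // prints 10
```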
Direct access to HBase or MySQL requires consideration of data locality.
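For relational sources, one built-in option is `org.apache.spark.rdd.JdbcRDD`. A sketch, where the connection URL, credentials, table, and partition bounds are all hypothetical:

```scala
import java.sql.DriverManager
import org.apache.spark.rdd.JdbcRDD

// Sketch: read rows from MySQL in parallel. Note that a database has no
// HDFS-style locality, so every partition pulls its rows over the network.
val jdbcRdd = new JdbcRDD(
  sc,
  () => DriverManager.getConnection("jdbc:mysql://dbhost:3306/test", "user", "pass"),
  "SELECT id, name FROM users WHERE id >= ? AND id <= ?",
  1, 100000, // lower and upper bounds of the partitioning column
  10,        // number of partitions
  rs => (rs.getInt("id"), rs.getString("name"))
)
```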
For follow-up courses, see Liaoliang's Sina Weibo, DT Big Data Dream Factory: http://weibo.com/ilovepains
Liaoliang, China's leading Spark authority; public WeChat account: DT_Spark
Please credit the source when reposting.
Spark 3000-disciple cohort, 15th lesson: summary of "A Thorough Decryption of RDD Creation Internals"