From https://databricks.gitbooks.io/databricks-spark-knowledge-base/content/performance_optimization/how_many_partitions_does_an_rdd_have.html
For tuning and troubleshooting, it's often necessary to know how many partitions an RDD represents. There are a few ways to find this information:
View Task Execution Against Partitions Using the UI
When a stage executes, you can see the number of partitions for a given stage in the Spark UI. For example, the following simple job creates an RDD of 100 elements across 4 partitions, then executes a dummy map task before collecting the elements back to the driver program:
    scala> val someRDD = sc.parallelize(1 to 100, 4)
    someRDD: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:12

    scala> someRDD.map(x => x).collect
    res1: Array[Int] = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100)
In Spark's application UI, you can see from the following screenshot that the "Total Tasks" represents the number of partitions:
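To build intuition for why 4 partitions yield 4 tasks, here is a plain-Python toy sketch (no Spark required) of how a 100-element range might be divided into 4 partitions. The slicing rule used below, where partition i holds indices [i*n//slices, (i+1)*n//slices), is an assumption modeled on Spark's range-slicing behavior, not an official PySpark API:

```python
def slice_range(data, num_slices):
    """Toy version of splitting a collection into partitions.

    Each partition i receives the index range
    [i*n // num_slices, (i+1)*n // num_slices) -- an assumption
    based on how Spark slices parallelized ranges, not Spark code.
    """
    n = len(data)
    return [
        data[i * n // num_slices:(i + 1) * n // num_slices]
        for i in range(num_slices)
    ]

partitions = slice_range(list(range(1, 101)), 4)
print(len(partitions))               # 4 partitions -> 4 tasks in the UI
print([len(p) for p in partitions])  # [25, 25, 25, 25]
```

Each of the 4 slices becomes one task in the map stage, which is why "Total Tasks" in the UI equals the partition count.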
View Partition Caching Using the UI
When persisting (a.k.a. caching) RDDs, it's useful to understand how many partitions have been stored. The example below is identical to the prior one, except that we'll now cache the RDD before processing it. After this completes, we can use the UI to understand what has been stored by this operation.
    scala> someRDD.setName("Toy").cache
    res2: someRDD.type = Toy ParallelCollectionRDD[0] at parallelize at <console>:12

    scala> someRDD.map(x => x).collect
    res3: Array[Int] = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100)
Note from the screenshot that there are four partitions cached.
Inspect RDD Partitions Programmatically
In the Scala API, an RDD holds a reference to its array of partitions, which you can use to find out how many partitions there are:
    scala> val someRDD = sc.parallelize(1 to 100, 30)
    someRDD: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:12

    scala> someRDD.partitions.size
    res0: Int = 30
In the Python API, there is a method for explicitly listing the number of partitions:
    In [1]: someRDD = sc.parallelize(range(101), 30)

    In [2]: someRDD.getNumPartitions()
    Out[2]: 30
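One reason to ask the RDD itself (partitions.size in Scala, getNumPartitions() in Python) rather than inferring the count from the data: with 101 elements across 30 partitions, the partitions cannot all be the same size. The toy sketch below (plain Python, no Spark required) computes the per-partition sizes under the same assumed slicing rule as before; it is an illustration, not PySpark's actual implementation:

```python
def partition_sizes(n_elements, num_partitions):
    """Size of each partition when n_elements are sliced into
    num_partitions contiguous ranges, assuming partition i spans
    indices [i*n // p, (i+1)*n // p) -- a toy model of Spark's
    range slicing, not an official API.
    """
    return [
        (i + 1) * n_elements // num_partitions
        - i * n_elements // num_partitions
        for i in range(num_partitions)
    ]

sizes = partition_sizes(101, 30)
print(len(sizes))              # 30 partitions
print(min(sizes), max(sizes))  # a mix of 3- and 4-element partitions
print(sum(sizes))              # 101 -- every element lands somewhere
```

The uneven 3-vs-4 split is invisible from the element count alone, so the partition count is best read off the RDD directly.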
Note that in the examples above, the number of partitions is intentionally set to 30 upon initialization.
Reproduced from "How Many Partitions Does an RDD Have?"