
I have a partitioned Hive table. If I want to create a Spark DataFrame out of this table, how many DataFrame partitions will be created?

  • Possible duplicate of [How does Spark SQL decide the number of partitions it will use when loading data from a Hive table?](https://stackoverflow.com/questions/44061443/how-does-spark-sql-decide-the-number-of-partitions-it-will-use-when-loading-data) – vikrant rana Sep 19 '19 at 05:33

1 Answer


It does not depend on the number of Hive table partitions; it depends on which version of Spark you are using:

For Spark < 2.0

***using rdd and then creating dataframe***
If you create an RDD first, you can explicitly specify the number of partitions:
val rdd = sc.textFile("filepath", 4)
In the example above it is 4.
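For example, a minimal sketch for Spark 1.x, assuming an existing SparkContext `sc`; the path, the parsing and the column names are placeholders. The DataFrame built on top of such an RDD keeps the RDD's partitioning:

```scala
import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)   // pre-2.0 entry point for DataFrames
import sqlContext.implicits._

// note: textFile's second argument is a *minimum* number of partitions, not an exact count
val rdd = sc.textFile("/path/to/data", 4)

// placeholder parsing: split CSV-like lines into two columns
val df = rdd.map(_.split(","))
            .map(a => (a(0), a(1)))
            .toDF("key", "value")

println(df.rdd.partitions.length)   // usually 4 for a small input
```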


***directly creating dataframe***
It depends on the Hadoop configuration (min / max split size).

You can use the Hadoop configuration options:
mapred.min.split.size
mapred.max.split.size

as well as the HDFS block size, to control the partition size for filesystem-based formats.
val minSplit: Int = ???   // desired minimum split size in bytes
val maxSplit: Int = ???   // desired maximum split size in bytes
sc.hadoopConfiguration.setInt("mapred.min.split.size", minSplit)
sc.hadoopConfiguration.setInt("mapred.max.split.size", maxSplit)

For Spark >= 2.0

***using rdd and then creating dataframe***:
same as mentioned for Spark < 2.0


***directly creating dataframe***

You can use the spark.sql.files.maxPartitionBytes configuration:
spark.conf.set("spark.sql.files.maxPartitionBytes", maxSplit)

Also keep in mind:

Datasets created from an RDD inherit the number of partitions from their parent.
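A quick way to see that inheritance, assuming an existing `spark` session on 2.x:

```scala
import spark.implicits._

val rdd = spark.sparkContext.parallelize(1 to 1000, 8)   // 8 partitions
val ds  = rdd.toDS()

println(rdd.getNumPartitions)      // 8
println(ds.rdd.getNumPartitions)   // 8 -- inherited from the parent RDD
```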