I am trying to merge thousands of DataFrames, held as a Seq[org.apache.spark.sql.DataFrame], into a single DataFrame. I used something like the following, where x is the list of DataFrames:
val y = x.reduce(_ union _)
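For reference, here is a minimal self-contained sketch of the setup above; the SparkSession settings, schema, and column names are illustrative assumptions, not my real data:

    import org.apache.spark.sql.{DataFrame, SparkSession}

    val spark = SparkSession.builder()
      .appName("union-many-dataframes")
      .master("local[*]")          // assumption: local run just for illustration
      .getOrCreate()
    import spark.implicits._

    // x: many small DataFrames that all share the same schema
    val x: Seq[DataFrame] = (1 to 1000).map(i => Seq((i, s"row$i")).toDF("id", "value"))

    // pairwise reduce with union, as in the line above
    val y: DataFrame = x.reduce(_ union _)

    println(y.count())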
But it's taking forever to complete.
Is there a more efficient way to accomplish this, either in code or by tuning Spark configuration settings?
Any help is really appreciated.