I am writing the code below to union multiple CSV files and write the combined data into a new file, but I am getting a compile error.
```scala
val filesData = List("file1", "file2")
val dataframes = filesData.map(spark.read.option("header", true).csv(_))
val combined = dataframes.reduce(_ union _)
val data = combined.rdd
val head: Array[String] = data.first()   // <- error occurs on this line
val memberDataRDD = data.filter(_(0) != head(0))
```
The compiler error is:

```
type mismatch; found : org.apache.spark.sql.Row required: Array[String]
```
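For context, `DataFrame.rdd` yields an `RDD[Row]`, so `data.first()` returns an `org.apache.spark.sql.Row`, not an `Array[String]`. Below is a minimal sketch of converting a `Row`'s fields into an `Array[String]` (the `Row` values here are hypothetical stand-ins for a header row; only the spark-sql jar is needed on the classpath, no running cluster):

```scala
import org.apache.spark.sql.Row

object RowToArray {
  def main(args: Array[String]): Unit = {
    // A Row like the one DataFrame.rdd would produce; values are made up.
    val head: Row = Row("id", "name")

    // Row.toSeq gives a Seq[Any]; String.valueOf also tolerates nulls.
    val headArr: Array[String] = head.toSeq.map(v => String.valueOf(v)).toArray

    println(headArr.mkString(","))
  }
}
```

Note also that with `option("header", true)` Spark already consumes the header line, so the header row normally does not appear in the data at all.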