I have two DataFrames:

|data          |
|--------------|
|[1,Rob,12]    |
|[2,Jeremy,11] |
|[3,Bart,14]   |


scala> data.printSchema()

root
 |-- data: array (nullable = true)
 |    |-- element: string (containsNull = true)

and

|headers         |
|----------------|
|[id,name,deptid]|


scala> headers.printSchema()

root
 |-- headers: array (nullable = true)
 |    |-- element: string (containsNull = true)

Question: How do I create an output DataFrame in the following format using the headers DF and the data DF?

| id | name  | deptid|
|----| ------|-------|
| 1  | Rob   | 12    |
| 2  | Jeremy| 11    |
| 3  | Bart  | 14    |

2 Answers

You can extract the list of column names from the headers DataFrame and use select to map each element of the data array to its corresponding header:

import org.apache.spark.sql.functions._
import spark.implicits._  // for toDF and $; already in scope in spark-shell

val dataDF = Seq(
  Seq("1", "Rob", "12"),
  Seq("2", "Jeremy", "11"),
  Seq("3", "Bart", "14")
).toDF("data")

val headersDF = Seq(
  Seq("id", "name", "deptid")
).toDF("headers")

val cols = headersDF.first.getSeq[String](0)
// cols: Seq[String] = WrappedArray(id, name, deptid)

val resultDF = dataDF.
  select( (0 until cols.size).map( i => $"data"(i).as(cols(i)) ): _* )

resultDF.show
// +---+------+------+
// | id|  name|deptid|
// +---+------+------+
// |  1|   Rob|    12|
// |  2|Jeremy|    11|
// |  3|  Bart|    14|
// +---+------+------+

You can check the following thread on how to get columns out of an array:

How to explode an array into multiple columns in Spark

You can then rename the columns using the withColumnRenamed function, as sketched below.
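
For reference, here is a minimal sketch of that approach, assuming the same dataDF as in the first answer; the intermediate column names col0/col1/col2 are placeholders:

import org.apache.spark.sql.functions.col

// Pull each array element out into its own column
val splitDF = dataDF.select(
  col("data").getItem(0).as("col0"),
  col("data").getItem(1).as("col1"),
  col("data").getItem(2).as("col2")
)

// Rename the generated columns to the desired headers
val resultDF = splitDF.
  withColumnRenamed("col0", "id").
  withColumnRenamed("col1", "name").
  withColumnRenamed("col2", "deptid")

resultDF.show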
