In the JSON objects I'm processing, I'm given a nested StructType in which each key is a country code, and each country's struct contains a currency and a price:
root
 |-- id: string (nullable = true)
 |-- pricingByCountry: struct (nullable = true)
 |    |-- regionPrices: struct (nullable = true)
 |    |    |-- AT: struct (nullable = true)
 |    |    |    |-- currency: string (nullable = true)
 |    |    |    |-- price: double (nullable = true)
 |    |    |-- BT: struct (nullable = true)
 |    |    |    |-- currency: string (nullable = true)
 |    |    |    |-- price: double (nullable = true)
 |    |    |-- CL: struct (nullable = true)
 |    |    |    |-- currency: string (nullable = true)
 |    |    |    |-- price: double (nullable = true)
 ...etc.
and I'd like to explode it so that instead of one column per country, I get one row per country:
+---+--------+---------+------+
| id| country| currency| price|
+---+--------+---------+------+
| 0| AT| EUR| 100|
| 0| BT| NGU| 400|
| 0| CL| PES| 200|
+---+--------+---------+------+
These solutions make sense intuitively: "Spark DataFrame exploding a map with the key as a member" and "Spark scala - Nested StructType conversion to Map". Unfortunately, neither works here, because they map over a whole row, whereas I'm passing in a single column. I don't want to manually map the whole row, just the one column that contains the nested structs; there are several other attributes at the same level as "id" that I'd like to keep in place.
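To make the desired reshaping concrete outside of Spark, here is a minimal plain-Python sketch of the same transformation on an equivalent nested dict. The `record` value and the `explode_prices` helper are illustrative only (the values mirror the example table above), not my actual data or code:

```python
# One input record with the nested shape shown in the schema above.
record = {
    "id": "0",
    "pricingByCountry": {
        "regionPrices": {
            "AT": {"currency": "EUR", "price": 100.0},
            "BT": {"currency": "NGU", "price": 400.0},
            "CL": {"currency": "PES", "price": 200.0},
        }
    },
}

def explode_prices(rec):
    """Yield one flat row per country key under regionPrices,
    carrying the top-level id along with each row."""
    for country, info in rec["pricingByCountry"]["regionPrices"].items():
        yield {
            "id": rec["id"],
            "country": country,
            "currency": info["currency"],
            "price": info["price"],
        }

rows = list(explode_prices(record))
```

In Spark terms, the goal is the same pivot from struct fields to rows, but done on the `pricingByCountry` column alone while the rest of the row's columns pass through untouched.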