
I currently have a JSON file that I am trying to query with sqlContext.sql(). It looks something like this:

{
  "sample": {
    "persons": [
      {
        "id": "123"
      },
      {
        "id": "456"
      }
    ]
  }
}

If I just want the first value, I would type:

sqlContext.sql("SELECT sample.persons[0] FROM test")

but I want all the values of "persons" without having to write a loop. Given the size of these files, looping over every element would consume too much processing power and be impractical.

I thought I would be able to put a range inside the [] brackets, but I can't find any syntax for that.


1 Answer

If your schema looks like this:

root
 |-- sample: struct (nullable = true)
 |    |-- persons: array (nullable = true)
 |    |    |-- element: struct (containsNull = true)
 |    |    |    |-- id: string (nullable = true)

and you want to access the individual structs from the persons array, all you have to do is explode it:

from pyspark.sql.functions import explode

# One output row per element of sample.persons, then project the id field
df.select(explode("sample.persons").alias("person")).select("person.id")
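For intuition, explode flattens an array column so that each element becomes its own row. Here is a minimal plain-Python sketch of that semantics (not Spark code; the data mirrors the sample JSON above):

```python
# Data shaped like the sample JSON in the question.
data = {"sample": {"persons": [{"id": "123"}, {"id": "456"}]}}

# explode("sample.persons"): one row per person struct in the array.
rows = [person for person in data["sample"]["persons"]]

# .select("person.id"): project the id field from each row.
ids = [row["id"] for row in rows]
print(ids)  # ['123', '456']
```

In Spark this happens in parallel across partitions, so it avoids the single driver-side loop the question is worried about.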

See also: Querying Spark SQL DataFrame with complex types
