
I am trying to convert my pyspark sql dataframe to json and then save as a file.

df_final = df_final.union(join_df)

df_final contains the value as such:

(screenshot of df_final omitted; it shows columns Variable, Min, and Max)

I tried something like this, but it created invalid JSON:

df_final.coalesce(1).write.mode('overwrite').format('json').save(data_output_file + "createjson.json")

{"Variable":"Col1","Min":"20","Max":"30"}
{"Variable":"Col2","Min":"25","Max":"40"}

My expected file should have data as below:

[
{"Variable":"Col1",
"Min":"20",
"Max":"30"},
{"Variable":"Col2",
"Min":"25",
"Max":"40"}]

4 Answers


In PySpark you can store your DataFrame directly as a JSON file; there is no need to convert the DataFrame to JSON first.

df_final.coalesce(1).write.format('json').save('/path/file_name.json')

If you still want to convert your DataFrame to JSON, you can use df_final.toJSON().

    Yeah, but it stores the data line by line: {"Variable":"Col1","Min":"20","Max":"30"} {"Variable":"Col2","Min":"25","Max":"40"}; instead it should be separated by commas and enclosed in square brackets – Shankar Panda Nov 23 '18 at 07:48
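As a follow-up to the comment above: the line-by-line output can be turned into the bracketed, comma-separated form with a small post-processing step. A minimal stdlib-only sketch (the sample lines mirror the question's data; in practice they would come from the part file Spark wrote):

```python
import json

def ndjson_to_array(lines):
    # Each line of Spark's JSON output is one standalone JSON object
    # (newline-delimited JSON). Parse them all, then re-emit as one array.
    records = [json.loads(line) for line in lines if line.strip()]
    return json.dumps(records)

lines = [
    '{"Variable":"Col1","Min":"20","Max":"30"}',
    '{"Variable":"Col2","Min":"25","Max":"40"}',
]
print(ndjson_to_array(lines))
# → [{"Variable": "Col1", "Min": "20", "Max": "30"}, {"Variable": "Col2", "Min": "25", "Max": "40"}]
```

The same function works on an open file handle, since iterating a file yields its lines.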

Here is how you can do the equivalent of json.dump for a dataframe with PySpark 1.3+.

import json

df_list_of_jsons = df.toJSON().collect()
df_list_of_dicts = [json.loads(x) for x in df_list_of_jsons]
df_json = json.dumps(df_list_of_dicts)
sc.parallelize([df_json]).repartition(1).cache().saveAsTextFile("<HDFS_PATH>")

Note this will load the whole DataFrame into driver memory, so it is only recommended for small DataFrames.


One solution is to collect the rows and then use json.dump:

import json

# Row objects must be converted to dicts before serializing
collected_rows = df_final.collect()
with open(data_output_file + 'createjson.json', 'w') as outfile:
    json.dump([row.asDict() for row in collected_rows], outfile)

If you want Spark to process the result as JSON files later, then the output schema you already have in HDFS is right.

I assume the issue you encountered is that you cannot read the data back from a normal Python script using:

import json

# This fails: the file is newline-delimited JSON, not a single JSON document
with open('data.json') as f:
  data = json.load(f)

You should try to read data line by line:

data = []
with open("data.json",'r') as datafile:
  for line in datafile:
    data.append(json.loads(line))

and you can use pandas to create a DataFrame:

import pandas as pd

df = pd.DataFrame(data)
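If pandas is available, the line-by-line loop is not strictly necessary: pd.read_json can parse newline-delimited JSON directly with lines=True. A sketch (the in-memory string stands in for the part file Spark wrote; with a real file you would pass its path instead):

```python
import io

import pandas as pd

# Spark writes newline-delimited JSON; pandas can read it directly.
ndjson = (
    '{"Variable":"Col1","Min":"20","Max":"30"}\n'
    '{"Variable":"Col2","Min":"25","Max":"40"}\n'
)
df = pd.read_json(io.StringIO(ndjson), lines=True)
print(df)
```

Note that pandas applies dtype inference here, so numeric-looking strings like "20" may come back as integers.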
  • I was trying to understand why there was an answer that was related to reading the json file rather than writing out to it. I understand now, the json format that spark writes out is not comma delimited, and so it must be read back in a little differently. Thank you so much for this – Fahad Ashraf Feb 19 '21 at 04:47
    @FahadAshraf Glad that helped. And yes, the JSON format that Spark writes out is not comma delimited. It's very confusing when reading a JSON file created by Spark (or other HDFS tooling) for the first time. – chilun Feb 22 '21 at 03:42