Hi, I have a dataset with more than 500,000 rows, and a sample of the data looks like this:
[
{
"FIELD_1":33190,
"FIELD_2":33189,
"FIELD_3":"test",
"FIELD_4":"COL1",
"FIELD_5":"4001",
"FIELD_6":"TEST"
},
{
"FIELD_1":33191,
"FIELD_2":33189,
"FIELD_3":"test2",
"FIELD_4":"COL3",
"FIELD_5":"4002",
"FIELD_6":"TEST"
}
]
I want to save this data directly to S3 efficiently. I checked many examples, like "Save Dataframe to csv directly to s3 Python", but they all go through a DataFrame. Do I need to use a DataFrame here, and is that a good approach?
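For context, here is a minimal sketch of what I am considering with plain `boto3` and no DataFrame: serialize the records to gzipped JSON Lines in memory and upload with `put_object`. The bucket and key names are placeholders, and `upload_records` assumes `boto3` is installed and AWS credentials are configured.

```python
import gzip
import io
import json


def records_to_jsonl_gz(records):
    """Serialize an iterable of dicts to gzipped newline-delimited JSON bytes.

    JSON Lines avoids building one giant JSON array in memory and is
    splittable by tools like Athena/Glue; gzip shrinks the upload.
    """
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
        for rec in records:
            gz.write((json.dumps(rec) + "\n").encode("utf-8"))
    return buf.getvalue()


def upload_records(records, bucket, key):
    """Upload the serialized records straight to S3 (no temp file on disk)."""
    import boto3  # imported here so serialization works without boto3 installed

    body = records_to_jsonl_gz(records)
    s3 = boto3.client("s3")
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=body,
        ContentEncoding="gzip",
        ContentType="application/json",
    )


if __name__ == "__main__":
    sample = [
        {"FIELD_1": 33190, "FIELD_2": 33189, "FIELD_3": "test",
         "FIELD_4": "COL1", "FIELD_5": "4001", "FIELD_6": "TEST"},
    ]
    # "my-bucket" / "exports/data.jsonl.gz" are placeholder names
    upload_records(sample, "my-bucket", "exports/data.jsonl.gz")
```

Is something like this reasonable at 500k rows, or should I switch to a multipart upload / `upload_fileobj` instead of buffering everything in memory?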