I found a solution for JSON to CSV conversion. Below are the sample JSON and my solution.
{
  "took" : 111,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 2,
      "relation" : "eq"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "alerts",
        "_type" : "_doc",
        "_id" : "1",
        "_score" : 1.0,
        "_source" : {
          "alertID" : "639387c3-0fbe-4c2b-9387-c30fbe7c2bc6",
          "alertCategory" : "Server Alert",
          "description" : "Successfully started.",
          "logId" : null
        }
      },
      {
        "_index" : "alerts",
        "_type" : "_doc",
        "_id" : "2",
        "_score" : 1.0,
        "_source" : {
          "alertID" : "2",
          "alertCategory" : "Server Alert",
          "description" : "Successfully stopped.",
          "logId" : null
        }
      }
    ]
  }
}
The solution:
jq -r '["alertID", "alertCategory", "description", "logId"],
       (.hits.hits[]._source | [.alertID, .alertCategory, .description, .logId // "null"])
       | @csv' < /root/events.json
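With the sample above, this prints the header row once, followed by one row per hit (logId comes out as the literal string "null" because of the // "null" fallback):

"alertID","alertCategory","description","logId"
"639387c3-0fbe-4c2b-9387-c30fbe7c2bc6","Server Alert","Successfully started.","null"
"2","Server Alert","Successfully stopped.","null"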
The problem with this solution is that I have to hard-code the column names. What if my JSON gets a few additional fields under _source later? I need a solution that can handle the dynamic data under _source. I am open to any other tool or command in shell.
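To illustrate what I mean by dynamic, here is a rough sketch of the idea (untested; it assumes jq 1.5+ for keys_unsorted and that every hit's _source carries the same keys), which takes the column names from the first hit instead of hard-coding them:

jq -r '
  (.hits.hits[0]._source | keys_unsorted) as $cols  # column names from the first hit, in document order
  | $cols,                                          # header row, emitted once
    (.hits.hits[]._source | [.[$cols[]]])           # values in the same column order
  | @csv
' < /root/events.json

On the sample above this yields the same four columns, but any field added to _source later would be picked up automatically; note that with this sketch a null value comes out as an empty CSV field rather than the literal "null".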