I have a bit of a problem. I've tried many different iterations of `jq -r` against a large set of JSON files spit out by an object detector, but I can't get it to produce a CSV file. I've reduced these massive files to the short example below, which has exactly the same data structure. (Note: the key names like "book", "person", and "animal" change from script to script, so a way to do this without hardcoding the keys in the command would be most desirous, but not requiredous.)
Example:

```json
{
  "/src/files/image_1.png": {
    "book": 0.01711445301771164,
    "person": 0.000330559065533263624,
    "place": 0.9814764857292175,
    "animal": 1.8662762158783153e-05,
    "vehicle": 0.0010597968939691782
  },
  "/src/files/image_2.png": {
    "book": 0.23741412162780762,
    "person": 0.1587823033328247,
    "place": 0.59659236669504,
    "animal": 0.0036556862760335207,
    "vehicle": 0.003555471543222666
  }
}
```
Ideally, I'd like to end up with a CSV file whose tabular format looks something like this:
File | Book | Person | Place | Animal | Vehicle |
---|---|---|---|---|---|
/src/files/image_1.png | 0.01711445301771164 | 0.000330559065533263624 | 0.9814764857292175 | 1.8662762158783153e-05 | 0.0010597968939691782 |
/src/files/image_2.png | 0.23741412162780762 | 0.1587823033328247 | 0.59659236669504 | 0.0036556862760335207 | 0.003555471543222666 |
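For what it's worth, this is the shape of pipeline I've been fumbling toward — a sketch only, which assumes every image object shares the same key set, and where `detections.json` / `detections.csv` are placeholder filenames, not my real paths:

```shell
# Sample input (same structure as the example above).
cat > detections.json <<'EOF'
{
  "/src/files/image_1.png": {
    "book": 0.01711445301771164,
    "person": 0.000330559065533263624,
    "place": 0.9814764857292175,
    "animal": 1.8662762158783153e-05,
    "vehicle": 0.0010597968939691782
  },
  "/src/files/image_2.png": {
    "book": 0.23741412162780762,
    "person": 0.1587823033328247,
    "place": 0.59659236669504,
    "animal": 0.0036556862760335207,
    "vehicle": 0.003555471543222666
  }
}
EOF

# Derive the column names from the first object's keys (keys_unsorted keeps
# the order the detector wrote them in), emit a header row, then one row per
# file, pulling values out in that same column order.
jq -r '
  (to_entries[0].value | keys_unsorted) as $cols
  | (["File"] + $cols),
    (to_entries[] | [.key] + [.value[$cols[]]])
  | @csv
' detections.json > detections.csv
```

The idea being that nothing is hardcoded: the header and the value order both come from whatever keys the first object happens to have.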