
I've never used jq to generate JSON, only to parse it, so this is untraveled territory for me.

I found "jq & bash: make JSON array from variable", which gets me closer to what I'm after. However, I've yet to figure out how to dynamically create the key names for the structure I want.

The structure I'm seeking looks something like this:

{
  "eth0": {
    "key1": "value1",
    "key2": "value2",
    "key3": "value3"
  },
  "eth3": {
    "key1": "value1",
    "key2": "value2",
    "key3": "value3"
  }
}

derived from csv:

iface,key1,key2,key3
eth0,value1,value2,value3
eth3,value1,value2,value3

The problem I've been having is dynamically generating the JSON keys from the CSV. I haven't been able to find a way to do that in jq. I'm using jq 1.5.

Am I spinning my wheels on this?

EDIT - possible answer

Currently investigating this cookbook answer:

https://github.com/stedolan/jq/wiki/Cookbook#convert-a-csv-file-with-headers-to-json

Jim

2 Answers


Ideally, your input would be JSON, so you'd normally run your file through something that converts the CSV to arrays jq can consume. But assuming your data isn't complex and the values themselves don't contain commas, you can read in the raw lines and split them. Then it's just a matter of building out your result.

$ jq -R 'split(",") as $k |                  # header line -> ["iface","key1",...]
    reduce (inputs | split(",")) as $r ({};  # each remaining row
        .[$r[0]] = ([range(1;$k|length) | { key: $k[.], value: $r[.] }] | from_entries)
    )' input.csv
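
For the sample CSV above, this produces exactly the structure from the question (jq pretty-prints with two-space indentation by default):

{
  "eth0": {
    "key1": "value1",
    "key2": "value2",
    "key3": "value3"
  },
  "eth3": {
    "key1": "value1",
    "key2": "value2",
    "key3": "value3"
  }
}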
Jeff Mercado
  • Given that my data is really simple, I've marked this as the answer. It works perfectly; I did need to alter my CSV a little, but that was on me. Really appreciate this answer. The other possible answer (in my edit) is in the right direction as well, but I'm rolling with this one since it's more succinct and works for my use case. – Jim Feb 13 '16 at 00:43

Here is a straightforward solution that is actually very similar to Jeff's (in particular, it makes the same assumptions about the CSV), but it uses input for the first line, inputs for the remaining lines, a simplified version of "objectify" from the jq Cookbook, and add instead of reduce in the main filter:

jq -R -n '
  # build an object by pairing each header with the value at the same index
  def objectify(headers): . as $in
    | reduce range(0; headers|length) as $i
        ({}; .[headers[$i]] = $in[$i]);

  ((input | split(","))[1:]) as $headers   # header line, minus the iface column
  | [ (inputs | split(",")) as $line       # each remaining row
      | { ($line[0]): ($line[1:] | objectify($headers)) } ]
  | add
' input.csv
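
If you'd rather keep the filter in its own file (csv2json.jq is just an illustrative name here), jq's -f/--from-file option gives the same result:

jq -R -n -f csv2json.jq input.csv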
peak