I have JSON exported from Cassandra in this format.

[
  {
    "correlationId": "2232845a8556cd3219e46ab8",
    "leg": 0,
    "tag": "received",
    "offset": 263128,
    "len": 30,
    "prev": {
      "page": {
        "file": 0,
        "page": 0
      },
      "record": 0
    },
    "data": "HEAD /healthcheck HTTP/1.1\r\n\r\n"
  },
  {
    "correlationId": "2232845a8556cd3219e46ab8",
    "leg": 0,
    "tag": "sent",
    "offset": 262971,
    "len": 157,
    "prev": {
      "page": {
        "file": 10330,
        "page": 6
      },
      "record": 1271
    },
    "data": "HTTP/1.1 200 OK\r\nDate: Wed, 14 Feb 2018 12:57:06 GMT\r\nServer: \r\nConnection: close\r\nX-CorrelationID: Id-2232845a8556cd3219e46ab8 0\r\nContent-Type: text/xml\r\n\r\n"
  }]

I would like to split it into separate documents:

{ "correlationId": "2232845a8556cd3219e46ab8", "leg": 0, "tag": "received", "offset": 263128, "len": 30, "prev": { "page": { "file": 0, "page": 0 }, "record": 0 }, "data": "HEAD /healthcheck HTTP/1.1\r\n\r\n" }

and

{ "correlationId": "2232845a8556cd3219e46ab8", "leg": 0, "tag": "sent", "offset": 262971, "len": 157, "prev": { "page": { "file": 10330, "page": 6 }, "record": 1271 }, "data": "HTTP/1.1 200 OK\r\nDate: Wed, 14 Feb 2018 12:57:06 GMT\r\nServer: \r\nConnection: close\r\nX-CorrelationID: Id-2232845a8556cd3219e46ab8 0\r\nContent-Type: text/xml\r\n\r\n" }

I wanted to use jq but couldn't find a way to do it.

Can you please advise how to split it into separate documents?

Thanks, Reddy

  • Possible duplicate of [Split a JSON file into separate files](https://stackoverflow.com/questions/28744361/split-a-json-file-into-separate-files) – John Zwinck Feb 14 '18 at 16:11
  • Do you need it to work for an arbitrary number of documents, or specifically for two documents? – John Zwinck Feb 14 '18 at 16:15

6 Answers


To split a JSON array with many records into chunks of a desired size, I simply use:

jq -c '.[0:1000]' mybig.json

which works like Python list slicing.

See the docs here: https://stedolan.github.io/jq/manual/

Array/String Slice: .[10:15]

The .[10:15] syntax can be used to return a subarray of an array or substring of a string. The array returned by .[10:15] will be of length 5, containing the elements from index 10 (inclusive) to index 15 (exclusive). Either index may be negative (in which case it counts backwards from the end of the array), or omitted (in which case it refers to the start or end of the array).
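
Note that .[0:1000] by itself yields only the first chunk. If you need every chunk, you can first ask jq for the array length and then loop over slice offsets. A minimal sketch, assuming bash and that mybig.json holds a single top-level array (the chunk size and output file names are illustrative):

chunk=1000
total=$(jq 'length' mybig.json)   # number of elements in the top-level array
for ((i = 0; i < total; i += chunk)); do
  jq -c ".[$i:$((i + chunk))]" mybig.json > "chunk_$((i / chunk)).json"
done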

djangonaut

Using jq, one can split an array into its components using the filter:

.[]

The question then becomes what to do with each component. If you want to direct each component to a separate file, you could (for example) use jq with the -c option and pipe the result into awk, which can then allocate the components to different files. See e.g. Split JSON File Objects Into Multiple Files
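
For example, a minimal sketch of that pipeline, assuming the input array is in input.json (the doc-N file-name pattern is illustrative):

# emit one compact JSON document per line; awk writes line N to docN.json
jq -c '.[]' input.json | awk '{ print > ("doc" NR ".json") }'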

Performance considerations

One might think that the overhead of calling jq+awk would be high compared to calling Python, but both jq and awk are lightweight compared to Python+json, as suggested by these timings (using Python 2.7.10):

time (jq -c '.[]' input.json | awk '{print > ("doc00" NR ".json");}')
user    0m0.005s
sys     0m0.008s

time python split.py
user    0m0.016s
sys     0m0.046s
peak

You can do it more efficiently using Python (because you can read the entire input once, instead of once per document):

import json

# Read the entire array once.
with open('in.json') as f:
    docs = json.load(f)

# Write each element to its own numbered file.
for ii, doc in enumerate(docs):
    with open('doc{}.json'.format(ii), 'w') as out:
        json.dump(doc, out, indent=2)
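
For the OP's two-element input, this writes doc0.json and doc1.json, each holding one pretty-printed document.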
John Zwinck
  • @JohnZwinck - If you mean more efficient than multiple invocations of jq, perhaps you could say so, though for N=2, my timings indicate that for the specific data provided by the OP, your python solution is overall more than five times slower than the twice-jq solution. – peak Feb 15 '18 at 07:45
  • @peak: Nobody cares about the performance of a degenerate case like N=2. Can you try it with N=10000, please? – John Zwinck Feb 15 '18 at 13:41

If you have an array of exactly 2 objects:

jq '.[0]' input.json > doc1.json && jq '.[1]' input.json > doc2.json

Results:

$ head -n100 doc[12].json
==> doc1.json <==
{
  "correlationId": "2232845a8556cd3219e46ab8",
  "leg": 0,
  "tag": "received",
  "offset": 263128,
  "len": 30,
  "prev": {
    "page": {
      "file": 0,
      "page": 0
    },
    "record": 0
  },
  "data": "HEAD /healthcheck HTTP/1.1\r\n\r\n"
}

==> doc2.json <==
{
  "correlationId": "2232845a8556cd3219e46ab8",
  "leg": 0,
  "tag": "sent",
  "offset": 262971,
  "len": 157,
  "prev": {
    "page": {
      "file": 10330,
      "page": 6
    },
    "record": 1271
  },
  "data": "HTTP/1.1 200 OK\r\nDate: Wed, 14 Feb 2018 12:57:06 GMT\r\nServer: \r\nConnection: close\r\nX-CorrelationID: Id-2232845a8556cd3219e46ab8 0\r\nContent-Type: text/xml\r\n\r\n"
}
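
To handle an arbitrary number of documents rather than exactly two (see also the comment below about jq '. | length'), one could loop over the array length. A sketch, assuming bash and the OP's input in input.json:

n=$(jq 'length' input.json)   # number of documents in the array
for ((i = 0; i < n; i++)); do
  jq ".[$i]" input.json > "doc$((i + 1)).json"
done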
RomanPerekhrest
  • You should probably use `jq '. | length' input.json` to get the number of documents, then loop that many times. – John Zwinck Feb 14 '18 at 15:53
  • @JohnZwinck, before making such statements you should have clarified that point with the OP, if you consider it critical. Anyway, I don't think such behavior is reputable for a 126K contributor. I'm disappointed. – RomanPerekhrest Feb 14 '18 at 16:04

One way to do this is to use jq's --stream option and pipe the output to the split command:

jq -cn --stream 'fromstream(1|truncate_stream(inputs))' bigfile.json | split -l "$num_of_elements_in_a_file" - big_part

Each output line holds one JSON document, so the number of documents per file is controlled by the value of num_of_elements_in_a_file.
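
For example, a sketch with a concrete chunk size (the size and the big_part prefix are illustrative): split names its output files big_partaa, big_partab, and so on, each holding one JSON document per line. If a chunk needs to be a single JSON array again, it can be slurped back through jq -s:

jq -cn --stream 'fromstream(1|truncate_stream(inputs))' bigfile.json | split -l 1000 - big_part

# re-wrap one chunk's documents as a JSON array
jq -s '.' big_partaa > chunk1.json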

You can check out this answer, Using jq how can I split a very large JSON file into multiple files, each a specific quantity of objects?, which in turn refers to this page for a discussion of how to use the streaming parser: https://github.com/stedolan/jq/wiki/FAQ#streaming-json-parser

spotchi

Just adding another example:

jq -c '.[0:10]' large_json.json > outputtosmall.json

jmariano