If you're reading myfile with Python, the splitstream module described here may be what you want. Here is a test example (test.py) which uses jq.py:
    import splitstream
    from jq import jq

    def slurp(filename):
        # yield each top-level JSON value in the file as a string
        with open(filename) as f:
            for s in splitstream.splitfile(f, format="json"):
                yield s

    obj = "{}"
    for jstr in slurp('myfile'):
        # merge the accumulated object with the next value using jq's add
        obj = jq("[%s, .] | add" % obj).transform(text=jstr, text_output=True)

    print(obj)
Here is a sample run:
    $ cat myfile
    {"a":1}
    {"b":2}
    $ python test.py
    {"a":1,"b":2}
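As an aside, the value-splitting that splitstream performs can be approximated with the standard library alone: json.JSONDecoder.raw_decode decodes one JSON value from a buffer and reports where it stopped. This is only an illustrative sketch of what the pipeline above computes, and merge_concatenated_json is my own name, not part of any library:

```python
import json

def merge_concatenated_json(path):
    # Decode back-to-back JSON values one at a time with raw_decode,
    # merging each object into the result -- roughly what the
    # splitstream + jq pipeline above produces.
    decoder = json.JSONDecoder()
    merged = {}
    with open(path) as f:
        buf = f.read()
    idx = 0
    while idx < len(buf):
        # skip whitespace between values
        while idx < len(buf) and buf[idx].isspace():
            idx += 1
        if idx >= len(buf):
            break
        obj, idx = decoder.raw_decode(buf, idx)
        merged.update(obj)
    return merged
```

Unlike the jq version this merges in Python, so it only helps if jq itself isn't a hard requirement.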
Although this appears to do what you asked for using jq.py, I don't think it's a good solution: shuttling the accumulated state between Python and jq on every iteration is clumsy and inefficient.
A better approach might be to run jq as a subprocess. Here is an example (test2.py):
    import json
    import sh

    # -M: monochrome output, -s: slurp the whole file into one array
    cmd = sh.jq('-M', '-s', 'add', 'myfile')
    obj = json.loads(cmd.stdout)
    print(json.dumps(obj, indent=2))
Sample run:
    $ python test2.py
    {
      "a": 1,
      "b": 2
    }
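If you'd rather not depend on the sh package, the same command line can be run with the standard library's subprocess module. This is a sketch assuming the jq binary is on PATH; jq_command and run_jq are my own helper names:

```python
import json
import subprocess

def jq_command(filter_expr, filename):
    # Build the same command line test2.py runs via sh:
    # -M for monochrome output, -s to slurp all inputs into one array
    return ['jq', '-M', '-s', filter_expr, filename]

def run_jq(filter_expr, filename):
    # Run jq as a child process and parse its JSON output
    out = subprocess.run(jq_command(filter_expr, filename),
                         capture_output=True, text=True, check=True)
    return json.loads(out.stdout)
```

With myfile as above, run_jq('add', 'myfile') should give the same merged object as test2.py.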