I have 16 JSON files, each about 14 GB in size. I've tried the following approach to read them line by line:
import io

import ijson
import pandas as pd

dfObj = pd.DataFrame(columns=["prefix", "type", "value"])

# file_name is the path to one of the 14 GB files
with open(file_name, encoding="UTF-8") as json_file:
    cursor = 0
    for line_number, line in enumerate(json_file):
        print("Processing line", line_number + 1, "at cursor index:", cursor)
        line_as_file = io.StringIO(line)
        # Use a new parser for each line
        json_parser = ijson.parse(line_as_file)
        for prefix, type, value in json_parser:
            # print("prefix=", prefix, "type=", type, "value=", value)
            dfObj = dfObj.append({"prefix": prefix, "type": type, "value": value}, ignore_index=True)
        cursor += len(line)
My aim is to load them into a pandas data frame to perform some search operations.
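For illustration, the kind of search I have in mind looks roughly like the sketch below. The column names are the ones built in the snippet above; the prefix "item.name" and the substring "foo" are just placeholder values.

    # Hypothetical example of the search operations I plan to run on the
    # resulting DataFrame; "item.name" and "foo" are placeholder values.
    matches = dfObj[
        (dfObj["prefix"] == "item.name")
        & (dfObj["value"].astype(str).str.contains("foo"))
    ]
    print(matches.head())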
The problem is that this approach takes a very long time to read even a single file.
Is there a more efficient approach to achieve this?