
I have a Twitter dataset, test_data, which contains more than two records. So I reformatted my JSON to contain an array and then tried to read the dataset. So far, I have written the following code:

import json

json_filename = 'test_data.json'

with open(json_filename, encoding="utf8") as file:
    array = {'foo': []}
    foo_list = array['foo']
    for line in file:
        obj = json.loads(line)  # parse one JSON object per line
        foo_list.append(obj)

print(json.dumps(array, indent=4))

This gives me the following warning:

'IOPub data rate exceeded. The notebook server will temporarily stop sending output to the client in order to avoid crashing it. To change this limit, set the config variable --NotebookApp.iopub_data_rate_limit.'

Any idea how to load such a huge dataset in Jupyter, and how to display it?
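One way to sidestep the IOPub warning (rather than raising the `--NotebookApp.iopub_data_rate_limit` config value) is to load everything but print only a small slice of the data. A minimal sketch; it first writes a hypothetical `sample_data.json` purely for illustration, standing in for the real test_data.json:

```python
import json

# Write a small stand-in file (hypothetical data, for illustration only).
json_filename = 'sample_data.json'
with open(json_filename, 'w', encoding="utf8") as f:
    for i in range(100):
        f.write(json.dumps({"id": i, "text": f"tweet {i}"}) + "\n")

# Load every record (one JSON object per line)...
with open(json_filename, encoding="utf8") as file:
    records = [json.loads(line) for line in file]

# ...but only print a sample, so the notebook output stays small.
print(f"{len(records)} records loaded")
print(json.dumps(records[:3], indent=4))
```

Printing `records[:3]` instead of the whole list keeps the output stream well under the notebook's data rate limit regardless of dataset size.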

Thanks in advance.

Paul P
Ravi
  • You shouldn't read your file line by line, since JSON objects are usually defined across several lines. `json.load` can read from files directly – Tranbi Aug 20 '21 at 11:16
  • actually, `json.loads` can only read from a string, but `json.load` can take input from a file. [related question](https://stackoverflow.com/questions/39719689/what-is-the-difference-between-json-load-and-json-loads-functions) – ComteHerappait Aug 20 '21 at 11:19
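The distinction drawn in the comments can be shown in a few lines (`StringIO` stands in for a real file here):

```python
import json
from io import StringIO

s = '{"a": 1}'

# json.loads parses a JSON *string*:
from_string = json.loads(s)

# json.load reads from a *file-like object* (an open file, or StringIO):
from_file = json.load(StringIO(s))

print(from_string == from_file)  # True: same parsed dict either way
```

Note that `json.load` expects the whole file to be one JSON document; for a file with one JSON object per line (as in the question), per-line `json.loads` is the appropriate tool.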

0 Answers