For my data (a few GB) that looks like

data = {'1': {'1': datetime.datetime.now()}, '2': {'2': datetime.datetime.now(), '3': datetime.datetime.now()}, '3': set()}
I wrote a custom JSON encoder:

import datetime
import json

class _JSONEncoder(json.JSONEncoder):
    def default(self, obj):
        # datetimes become ISO-8601 strings
        if isinstance(obj, datetime.datetime):
            return obj.isoformat()
        # sets become objects with null values
        elif isinstance(obj, set):
            return {o: None for o in obj}
        else:
            return json.JSONEncoder.default(self, obj)
Although

s = json.dumps(data, cls=_JSONEncoder)

works fine for smaller data, it never finishes for larger dicts. What am I doing wrong? It does work with orjson, though. Is the standard implementation really that slow?
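In case it helps to reproduce the issue, here is a self-contained version of the setup above, with fixed timestamps in place of datetime.now() so the output is deterministic:

```python
import datetime
import json

class _JSONEncoder(json.JSONEncoder):
    def default(self, obj):
        # datetimes become ISO-8601 strings
        if isinstance(obj, datetime.datetime):
            return obj.isoformat()
        # sets become objects with null values
        elif isinstance(obj, set):
            return {o: None for o in obj}
        else:
            return json.JSONEncoder.default(self, obj)

# Same shape as the real data, but with fixed timestamps
data = {
    '1': {'1': datetime.datetime(2024, 1, 1, 12, 0)},
    '2': {'2': datetime.datetime(2024, 1, 1, 12, 0),
          '3': datetime.datetime(2024, 1, 1, 12, 30)},
    '3': set(),
}

s = json.dumps(data, cls=_JSONEncoder)
print(s)
```

For this small input, the result round-trips as expected (the empty set serializes to an empty object).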