
I would like to define a subclass of dict with, in particular, custom JSON serialization. The problem I'm running into is that if I subclass dict directly, the json module never calls the 'default' function when it encounters instances of my subclass, thereby bypassing my custom serialization. Consider:

import collections.abc


class MyDictA(dict):
    # subclass of dict

    def to_json(self):
        return {
            "items": dict(self),
            "_type": self.__module__ + "." + self.__class__.__name__,
        }

    def __repr__(self):
        return self.__class__.__name__ + "(" + repr(dict(self)) + ")"


class MyDictB(collections.abc.MutableMapping):
    # behaves like a dict, but is not a dict;
    # the MutableMapping mixins supply the remaining dict methods

    def __getitem__(self, item):
        return self.__dict__[item]

    def __setitem__(self, key, value):
        self.__dict__[key] = value

    def to_json(self):
        return {
            "items": vars(self),
            "_type": self.__module__ + "." + self.__class__.__name__,
        }

    def __repr__(self):
        return self.__class__.__name__ + "(" + repr(self.__dict__) + ")"

    def __iter__(self):
        return self.__dict__.__iter__()

    def __len__(self):
        return len(self.__dict__)

    def __delitem__(self, key):
        del self.__dict__[key]

I can easily implement the remaining dict methods so that MyDictB is a drop-in replacement for dict, but this somehow feels non-Pythonic.
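
As an aside, collections.abc.MutableMapping already supplies most of the remaining mapping methods (get, update, items, pop, ...) as mixins on top of the five methods defined above, so MyDictB is close to a drop-in replacement out of the box:

b = MyDictB()
b.update({"foo": "bar"})          # update() is a MutableMapping mixin
print(b.get("foo"), "foo" in b)   # get() and __contains__ come from the ABC too
print(list(b.items()))            # as does items()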

Now we implement the custom serialization:

import json

def my_default(obj):
    if hasattr(obj, "to_json"):
        return obj.to_json()
    # fall back to json's usual error for objects we don't know how to serialize
    raise TypeError(repr(obj) + " is not JSON serializable")

Example:

A = MyDictA()
A["foo"] = "bar"

B = MyDictB()
B["foo"] = "bar"

jsonA = json.dumps(A, default=my_default)
jsonB = json.dumps(B, default=my_default)

Result:

>>> print(A)
MyDictA({'foo': 'bar'})
>>> print(B)
MyDictB({'foo': 'bar'})
>>> print(jsonA)
{"foo": "bar"}
>>> print(jsonB)
{"_type": "__main__.MyDictB", "items": {"foo": "bar"}}

As you can see, only MyDictB passes through the custom serialization in 'my_default'; instances of MyDictA never reach it, since they are dict instances.

The problem is that the json module dispatches on isinstance(obj, dict); see the implementation of _iterencode in json/encoder.py. Anything that passes that check is encoded directly as a plain dict, and the 'default' hook is never consulted.
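
Roughly, the dispatch in _iterencode looks like this (a simplified paraphrase, not the actual CPython source):

def _iterencode(o, _current_indent_level):
    if isinstance(o, str):
        ...                       # strings are encoded directly
    elif isinstance(o, (int, float)) or o is None or o is True or o is False:
        ...                       # so are the other scalar types
    elif isinstance(o, (list, tuple)):
        ...                       # lists and tuples get their own path
    elif isinstance(o, dict):
        ...                       # MyDictA lands here and is encoded as a
                                  # plain dict; 'default' is never consulted
    else:
        o = _default(o)           # only unrecognized types reach default()
        ...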

Note:

>>> isinstance(A, collections.abc.Mapping)
True
>>> isinstance(B, collections.abc.Mapping)
True
>>> isinstance(A, dict)
True
>>> isinstance(B, dict)
False

Is there a better way to get the json module to respect my subclassing of dict?


1 Answer


Partial solution:

jsonA_1 = json.dumps(A, default=my_default)
jsonB_1 = json.dumps(B, default=my_default)

def my_isinstance(obj, A_tuple):
    # report that MyDictA instances are *not* dicts, so the encoder
    # falls through to the 'default' hook instead of encoding them directly
    if isinstance(obj, MyDictA):
        if A_tuple is dict:
            return False
        if isinstance(A_tuple, collections.abc.Iterable):
            return any(my_isinstance(obj, A) for A in A_tuple)
    return isinstance(obj, A_tuple)

# override the 'isinstance' default in _make_iterencode
# (in CPython 3.x, __defaults__[5] is the isinstance=isinstance keyword default)
_make_iterencode_defaults = list(json.encoder._make_iterencode.__defaults__)
_make_iterencode_defaults[5] = my_isinstance
json.encoder._make_iterencode.__defaults__ = tuple(_make_iterencode_defaults)
# disable the C-accelerated encoder so the pure-Python _make_iterencode is used
json.encoder.c_make_encoder = None

assert isinstance(A, dict) is True
assert isinstance(B, dict) is False
assert my_isinstance(A, dict) is False
assert my_isinstance(B, dict) is False
assert my_isinstance(A, (dict, MyDictB)) is False
assert my_isinstance(A, (dict, MyDictA)) is True

# let's try that again:
jsonA_2 = json.dumps(A, default=my_default)
jsonB_2 = json.dumps(B, default=my_default)

Result:

>>> jsonA_1
'{"foo": "bar"}'
>>> jsonB_1
'{"items": {"foo": "bar"}, "_type": "__main__.MyDictB"}'
>>> jsonA_2
'{"items": {"foo": "bar"}, "_type": "__main__.MyDictA"}'
>>> jsonB_2
'{"items": {"foo": "bar"}, "_type": "__main__.MyDictB"}'

So that seems to work, except that it requires disabling c_make_encoder, the C-accelerated encoder, which is presumably faster than the pure-Python fallback.
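
Since this monkey-patches global state in the json module, one refinement (just a sketch of the same idea; patched_json_isinstance is a name I made up, and it is only tested against the example above) is to confine the patch to a context manager that restores the original defaults and the C encoder on exit:

import contextlib

@contextlib.contextmanager
def patched_json_isinstance():
    # temporarily swap in my_isinstance and disable the C encoder,
    # restoring both when the block exits
    old_defaults = json.encoder._make_iterencode.__defaults__
    old_c_make_encoder = json.encoder.c_make_encoder
    new_defaults = list(old_defaults)
    new_defaults[5] = my_isinstance
    json.encoder._make_iterencode.__defaults__ = tuple(new_defaults)
    json.encoder.c_make_encoder = None
    try:
        yield
    finally:
        json.encoder._make_iterencode.__defaults__ = old_defaults
        json.encoder.c_make_encoder = old_c_make_encoder

with patched_json_isinstance():
    jsonA_3 = json.dumps(A, default=my_default)  # custom serialization applies here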

Edit:

Similar solutions in How to change json encoding behaviour for serializable python object?
