I need to save a list (or a numpy array) as one of the entries in a JSON file. I am getting a "not JSON serializable" TypeError, and I can't figure out how to fix it (or why I don't get it when I pass a list to the dictionary manually).

My code:

import json
import pandas as pd

def get_col_stats(colname, numrows=None):
    print('start reading the column')
    df = pd.read_csv('faults_all_main_dp_1_joined__9-4-15.csv', engine='c', usecols=[colname], nrows=numrows)
    print('finished reading ' + colname)

    df.columns = ['col']
    uniq = list(df.col.unique())
    count = len(uniq)
    print('unique count is', count)

    if colname == 'faultDate':
        return {'type': 'date', 'min': df.col.min(), 'max': df.col.max()}
    elif count < 100000 or colname == 'name':
        return {'type': 'factor', 'uniq': uniq}
    else:
        return {'type': 'numeric', 'min': df.col.min(), 'max': df.col.max()}

d = {}

i = 'faultCode'
d[i] = get_col_stats(i, numrows=1000)
print(d)
print(type(d['faultCode']['uniq']))

json.dumps(d)

Out:

start reading the column
finished reading faultCode
unique count is 114

{'faultCode': {'uniq': [3604, 4179, 2869, ... 57], 'type': 'factor'}}

<class 'list'>

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-84-a877aa1b2642> in <module>()
      7 print(d)
      8 
----> 9 json.dumps(d)

/home/shiny/anaconda3/lib/python3.4/json/__init__.py in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
    228         cls is None and indent is None and separators is None and
    229         default is None and not sort_keys and not kw):
--> 230         return _default_encoder.encode(obj)
    231     if cls is None:
    232         cls = JSONEncoder

/home/shiny/anaconda3/lib/python3.4/json/encoder.py in encode(self, o)
    190         # exceptions aren't as detailed.  The list call should be roughly
    191         # equivalent to the PySequence_Fast that ''.join() would do.
--> 192         chunks = self.iterencode(o, _one_shot=True)
    193         if not isinstance(chunks, (list, tuple)):
    194             chunks = list(chunks)

/home/shiny/anaconda3/lib/python3.4/json/encoder.py in iterencode(self, o, _one_shot)
    248                 self.key_separator, self.item_separator, self.sort_keys,
    249                 self.skipkeys, _one_shot)
--> 250         return _iterencode(o, 0)
    251 
    252 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,

/home/shiny/anaconda3/lib/python3.4/json/encoder.py in default(self, o)
    171 
    172         """
--> 173         raise TypeError(repr(o) + " is not JSON serializable")
    174 
    175     def encode(self, o):

TypeError: 3604 is not JSON serializable

But:

d = {}
d['model'] = {'cont': False, 'uniq': [1,2,3,4]}
json.dumps(d)

… works fine.
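The difference between the two cases can be demonstrated directly (a minimal sketch, assuming numpy is available): `df.col.unique()` returns a numpy array, and wrapping it in `list()` does not convert the elements — each one is still a numpy scalar type such as `numpy.int64`, which the stock `json` encoder rejects, whereas a hand-written `[1,2,3,4]` contains plain Python ints.

```python
import json
import numpy as np

# list() keeps the numpy scalar elements; it does not convert them to int
uniq = list(np.array([3604, 4179, 2869]))
print(type(uniq[0]))   # a numpy integer type, not the built-in int

json.dumps([1, 2, 3, 4])  # plain ints: works fine
try:
    json.dumps(uniq)      # numpy ints: raises TypeError
except TypeError as e:
    print(e)
```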

Anarcho-Chossid

1 Answer


It looks like this is a known numpy issue caused by a lack of flexibility in Python's json module: numpy integer types are not subclasses of the built-in int, so the default encoder refuses them. There is a decent workaround in that bug report:

>>> import numpy, json
>>> def default(o):
...     if isinstance(o, numpy.integer): return int(o)
...     raise TypeError
... 
>>> json.dumps({'value': numpy.int64(42)}, default=default)
'{"value": 42}'

Essentially, json.dumps() takes a default argument:

default(obj) is a function that should return a serializable version of obj or raise TypeError. The default simply raises TypeError.

The workaround above simply passes a function to json.dumps() that converts numpy.integer values to int.
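The same idea extends to numpy floats and whole arrays, which is useful since the question's 'numeric' branch returns numpy min/max values. This is a sketch, not from the bug report — the extra isinstance checks (np.floating, np.ndarray) and the `np_default` name are my own additions covering the common cases:

```python
import json
import numpy as np

def np_default(o):
    """Convert numpy scalars and arrays to their plain-Python equivalents."""
    if isinstance(o, np.integer):
        return int(o)
    if isinstance(o, np.floating):
        return float(o)
    if isinstance(o, np.ndarray):
        return o.tolist()
    raise TypeError(repr(o) + " is not JSON serializable")

print(json.dumps({'uniq': np.array([3604, 4179]), 'min': np.float64(0.5)},
                 default=np_default))
```

With this in place, `json.dumps(d, default=np_default)` serializes the dictionary built by get_col_stats() without converting anything by hand first.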

Sam