432

After creating a NumPy array and saving it as a Django context variable, I receive the following error when loading the webpage:

array([   0,  239,  479,  717,  952, 1192, 1432, 1667], dtype=int64) is not JSON serializable

What does this mean?

Karnivaurus
  • It means that somewhere, something is trying to dump a numpy array using the `json` module. But `numpy.ndarray` is not a type that `json` knows how to handle. You'll either need to write your own serializer, or (more simply) just pass `list(your_array)` to whatever is writing the json. – mgilson Oct 30 '14 at 06:26
  • Note `list(your_array)` will not always work as it returns numpy ints, not native ints. Use `your_array.to_list()` instead. – ashishsingal Jan 04 '17 at 21:16
  • A note about @ashishsingal's comment: it should be `your_array.tolist()`, not `to_list()`. – vega Mar 17 '17 at 16:52
  • I wrote a [simple module](https://pypi.org/project/jdata/) to export complex data structures in python: `pip install jdata` then `import jdata as jd;import numpy as np; a={'str':'test','num':1.2,'np':np.arange(1,5,dtype=np.uint8)}; jd.show(a)` – FangQ Jan 27 '22 at 19:51
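A quick demonstration of the distinction drawn in the comments above (a minimal sketch; `list()` keeps NumPy scalars, `.tolist()` converts to native Python types, and the exact dtype shown is platform-dependent):

import numpy as np

a = np.array([1, 2, 3])
print(type(list(a)[0]))     # <class 'numpy.int64'> on most platforms: still a NumPy scalar
print(type(a.tolist()[0]))  # <class 'int'>: a native Python int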

16 Answers

464

I regularly "jsonify" np.arrays. Try using the ".tolist()" method on the arrays first, like this:

import numpy as np
import codecs, json

a = np.arange(10).reshape(2, 5)  # a 2 by 5 array
b = a.tolist()  # nested lists with the same data and indices
file_path = "/path.json"  # your path variable
json.dump(b, codecs.open(file_path, 'w', encoding='utf-8'),
          separators=(',', ':'),
          sort_keys=True,
          indent=4)  # this saves the array in .json format

In order to "unjsonify" the array use:

obj_text = codecs.open(file_path, 'r', encoding='utf-8').read()
b_new = json.loads(obj_text)
a_new = np.array(b_new)
travelingbones
  • Why can it only be stored as a list of lists? – Nikhil Prabhu Nov 07 '17 at 15:12
  • I don't know, but I expect np.array types have metadata that doesn't fit into JSON (e.g. they specify the data type of each entry, like float). – travelingbones Nov 07 '17 at 18:25
  • I tried your method, but it seems that the program gets stuck at `tolist()`. – Harvett Jan 31 '18 at 12:38
  • Not sure how to help you with the given info. Please try to make sure `a` is a numpy array. Then `a.tolist()` is just the method that transforms it into a list with the same structure. – travelingbones Feb 02 '18 at 20:08
  • @yurenzhong You probably didn't upgrade your numpy? – frankliuao Dec 18 '18 at 23:37
  • @frankliuao I found the reason is that `tolist()` takes a huge amount of time when the data is large. – Harvett Jan 07 '19 at 17:26
  • @yurenzhong Yes, because a list is supposed to be convenient, not size-efficient. – frankliuao Jan 08 '19 at 18:51
  • @NikhilPrabhu JSON is JavaScript Object Notation, and can therefore only represent the basic constructs from the JavaScript language: objects (analogous to python dicts), arrays (analogous to python lists), numbers, booleans, strings, and nulls (analogous to python Nones). Numpy arrays are not any of those things, and so cannot be serialised into JSON. Some can be converted to a JSON-like form (list of lists), which is what this answer does. – Chris L. Barnes Mar 13 '19 at 20:57
  • Note that while `tolist()` casts every value in the list to the closest native Python type, there is a case where it will not. From the docs of `item()` (linked from the `tolist()` docs): when the data type of `a` is longdouble or clongdouble, `item()` returns a scalar array object because there is no available Python scalar that would not lose information. Void arrays return a buffer object for `item()`, unless fields are defined, in which case a tuple is returned. – Avizipi Aug 21 '22 at 10:52
  • `array.tolist()` only solves the issue for one-dimensional arrays. 2D arrays are converted to lists of 1D arrays and so on, still triggering the "is not JSON serializable" error. For multidimensional arrays one has to call .tolist() recursively as described in this stackoverflow question: https://stackoverflow.com/q/39502461/10986531 – Thomas Fritz Jan 17 '23 at 17:04
416

Store a numpy.ndarray, or any nested-list composition containing one, as JSON:

import json
import numpy as np

class NumpyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        return json.JSONEncoder.default(self, obj)

a = np.array([[1, 2, 3], [4, 5, 6]])
print(a.shape)
json_dump = json.dumps({'a': a, 'aa': [2, (2, 3, 4), a], 'bb': [2]}, 
                       cls=NumpyEncoder)
print(json_dump)

Will output:

(2, 3)
{"a": [[1, 2, 3], [4, 5, 6]], "aa": [2, [2, 3, 4], [[1, 2, 3], [4, 5, 6]]], "bb": [2]}

To restore from JSON:

json_load = json.loads(json_dump)
a_restored = np.asarray(json_load["a"])
print(a_restored)
print(a_restored.shape)

Will output:

[[1 2 3]
 [4 5 6]]
(2, 3)
karlB
  • This should be way higher up the board, it's the generalisable and properly abstracted way of doing this. Thanks! – thclark Jan 18 '18 at 15:12
  • Is there a simple way to get the ndarray back from the list? – DarksteelPenguin Feb 23 '18 at 16:47
  • @DarksteelPenguin are you looking for [`numpy.asarray()`](https://docs.scipy.org/doc/numpy-1.10.4/reference/generated/numpy.asarray.html)? – aeolus May 04 '18 at 00:57
  • This answer is great and can easily be extended to serialize numpy float32 and np.float64 values as json too: `if isinstance(obj, np.float32) or isinstance(obj, np.float64): return float(obj)` – Bensge Jul 09 '19 at 16:07
  • This solution saves you from manually casting every numpy array to a list. – eduardosufan Mar 11 '20 at 17:22
  • +1. Why do we need the line "return json.JSONEncoder.default(self, obj)" at the end of "def default(self, obj)"? – Hans May 31 '20 at 22:01
96

I found this to be the best solution if you have nested numpy arrays in a dictionary:

import json
import numpy as np

class NumpyEncoder(json.JSONEncoder):
    """ Special json encoder for numpy types """
    def default(self, obj):
        if isinstance(obj, np.integer):
            return int(obj)
        elif isinstance(obj, np.floating):
            return float(obj)
        elif isinstance(obj, np.ndarray):
            return obj.tolist()
        return json.JSONEncoder.default(self, obj)

dumped = json.dumps(data, cls=NumpyEncoder)

with open(path, 'w') as f:
    f.write(dumped)  # json.dump(dumped, f) would double-encode the already-serialized string

Thanks to this guy.

tsveti_iko
  • Thanks for the helpful answer! I wrote the attributes to a json file, but am now having trouble reading back the parameters for Logistic Regression. Is there a 'decoder' for this saved json file? – TTZ Aug 17 '18 at 15:17
  • Of course, to read the `json` back you can use this: `with open(path, 'r') as f: data = json.load(f)`, which returns a dictionary with your data. – tsveti_iko Aug 20 '18 at 07:06
  • That's for reading the `json` file; to deserialize its output you can use this: `data = json.loads(data)` – tsveti_iko Aug 20 '18 at 07:17
  • I had to add this to handle the bytes datatype, assuming all bytes are utf-8 strings: `elif isinstance(obj, (bytes,)): return obj.decode("utf-8")` – Soichi Hayashi Apr 12 '20 at 19:31
  • +1. Why do we need the line "return json.JSONEncoder.default(self, obj)" at the end of "def default(self, obj)"? – Hans May 31 '20 at 22:03
  • @Hans for non-numpy objects, it returns default values from json encoder. – Ehsan Jul 29 '20 at 23:36
72

You can use Pandas:

import pandas as pd
pd.Series(your_array).to_json(orient='values')
John Zwinck
  • Great! And I think for a 2D np.array it will be something like `pd.DataFrame(your_array).to_json('data.json', orient='split')`. – Jadim Aug 19 '17 at 21:38
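A round-trip sketch of this approach (array values illustrative; the JSON string can be parsed back with the standard `json` module):

import json
import numpy as np
import pandas as pd

your_array = np.array([0, 239, 479, 717])
s = pd.Series(your_array).to_json(orient='values')
print(s)  # [0,239,479,717]
restored = np.asarray(json.loads(s))  # back to an ndarray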
48

Use the `json.dumps` `default` kwarg:

default should be a function that gets called for objects that can't otherwise be serialized. ... It should return a JSON encodable version of the object or raise a TypeError.

In the default function, check whether the object is from the numpy module; if so, use `ndarray.tolist` for an ndarray or `.item` for any other numpy-specific type.

import json
import numpy as np

def default(obj):
    if type(obj).__module__ == np.__name__:
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        else:
            return obj.item()
    raise TypeError('Unknown type:', type(obj))

dumped = json.dumps(data, default=default)
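For example, with a hypothetical `data` dict mixing an array and a NumPy scalar:

data = {'arr': np.arange(3), 'scalar': np.float32(1.5)}
print(json.dumps(data, default=default))
# {"arr": [0, 1, 2], "scalar": 1.5}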
moshevi
  • What's the role of the line `type(obj).__module__ == np.__name__:` there? Would it not suffice to check for the instance? – Heberto Mayorquin May 21 '20 at 10:22
  • @RamonMartinez To know that the object is a numpy object; this way I can use `.item` for almost any numpy object. The `default` function is called for all unknown types `json.dumps` attempts to serialize, not just numpy. – moshevi May 21 '20 at 15:08
  • I think this also assists https://stackoverflow.com/questions/69920913/what-is-the-cleanest-way-to-perform-nested-conversion-of-numpy-types-to-python though it would be nice to have a clean nested version too – Peter Cotton Nov 13 '21 at 17:08
7

This is not supported by default, but you can make it work quite easily! There are several things you'll want to encode if you want the exact same data back:

  • The data itself, which you can get with obj.tolist() as @travelingbones mentioned. Sometimes this may be good enough.
  • The data type. I feel this is important in quite some cases.
  • The dimension (not necessarily 2D), which could be derived from the above if you assume the input is indeed always a 'rectangular' grid.
  • The memory order (row- or column-major). This doesn't often matter, but sometimes it does (e.g. performance), so why not save everything?

Furthermore, your numpy array could be part of your data structure, e.g. you have a list with some matrices inside. For that you could use a custom encoder which basically does the above (see the sketch below).
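A minimal sketch of such an encoder/decoder pair (the `__ndarray__` key and the names here are illustrative, not the actual json-tricks format):

import json
import numpy as np

class FullNumpyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, np.ndarray):
            return {
                '__ndarray__': obj.tolist(),                 # the data itself
                'dtype': str(obj.dtype),                     # the data type
                'shape': obj.shape,                          # the dimensions
                'order': 'F' if np.isfortran(obj) else 'C',  # the memory order
            }
        return json.JSONEncoder.default(self, obj)

def decode_ndarray(dct):
    if '__ndarray__' in dct:
        return np.array(dct['__ndarray__'], dtype=dct['dtype'],
                        order=dct['order']).reshape(dct['shape'])
    return dct

a = np.arange(10, dtype=np.int64).reshape(2, 5)
s = json.dumps(a, cls=FullNumpyEncoder)
b = json.loads(s, object_hook=decode_ndarray)  # round trip preserves dtype and shape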

This should be enough to implement a solution. Or you could use json-tricks which does just this (and supports various other types) (disclaimer: I made it).

pip install json-tricks

Then

from datetime import datetime
from decimal import Decimal
from fractions import Fraction

from json_tricks import dumps
from numpy import arange

data = [
    arange(0, 10, 1, dtype=int).reshape((2, 5)),
    datetime(year=2017, month=1, day=19, hour=23, minute=00, second=00),
    1 + 2j,
    Decimal(42),
    Fraction(1, 3),
    MyTestCls(s='ub', dct={'7': 7}),  # a custom class from the json-tricks docs
    set(range(7)),
]
# Encode with metadata to preserve types when decoding
print(dumps(data))
Mark
4

I had a similar problem with a nested dictionary with some numpy.ndarrays in it.

def jsonify(data):
    json_data = dict()
    for key, value in data.items():  # use data.iteritems() on Python 2
        if isinstance(value, list):  # for lists
            value = [jsonify(item) if isinstance(item, dict) else item for item in value]
        if isinstance(value, dict):  # for nested dicts
            value = jsonify(value)
        if isinstance(key, int):  # if key is an integer: convert to string
            key = str(key)
        if type(value).__module__ == 'numpy':  # if value is numpy.*: convert to a python list
            value = value.tolist()
        json_data[key] = value
    return json_data
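Usage, with a hypothetical nested dict (integer keys become strings, numpy values become native types):

import json
import numpy as np

data = {1: np.arange(3), 'nested': {'a': np.float64(2.5)}}
print(json.dumps(jsonify(data)))
# {"1": [0, 1, 2], "nested": {"a": 2.5}}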
JLT
4

You could also use the default argument, for example:

import json
import numpy as np

def myconverter(o):
    if isinstance(o, np.float32):
        return float(o)
    raise TypeError('Not serializable: %s' % type(o))  # anything else is still an error

json.dump(data, fp, default=myconverter)  # fp: an open file object
steco
2

Also, there is some very interesting information on lists vs. arrays in Python here: Python List vs. Array - when to use?

It could be noted that once I convert my arrays into a list before saving them in a JSON file (in my deployment right now, anyway), once I read that JSON file for use later, I can continue to use them in list form (as opposed to converting them back to arrays).

And the data actually looks nicer (in my opinion) on the screen as a list (comma-separated) vs. an array (not comma-separated).

Using @travelingbones's .tolist() method above, I've been doing the following (catching a few errors I've found too):

SAVE DICTIONARY

def writeDict(values, name):
    writeName = DIR + name + '.json'  # DIR: your output directory
    with open(writeName, "w") as outfile:
        json.dump(values, outfile)

READ DICTIONARY

def readDict(name):
    readName = DIR + name + '.json'
    try:
        with open(readName, "r") as infile:
            dictValues = json.load(infile)
            return dictValues
    except IOError as e:
        print(e)
        return 'None'
    except ValueError as e:
        print(e)
        return 'None'

Hope this helps!

ntk4
2

Use NumpyEncoder; it will process the json dump successfully, without throwing NumPy array is not JSON serializable.

import numpy as np
import json
from numpyencoder import NumpyEncoder

arr = np.array([0, 239, 479, 717, 952, 1192, 1432, 1667], dtype=np.int64)
json.dumps(arr, cls=NumpyEncoder)
2

The other answers will not work if someone else's code (e.g. a module) is doing the `json.dumps()`. This happens often, for example with webservers that auto-convert their return responses to JSON, meaning we can't always change the arguments for `json.dumps()`.
This answer solves that, and is based on a (relatively) new solution that works for any 3rd-party class (not just numpy).

TLDR

pip install json_fix

import json_fix # import this anytime before the JSON.dumps gets called
import json

# create a converter
import numpy
json.fallback_table[numpy.ndarray] = lambda array: array.tolist()

# no additional arguments needed: 
json.dumps(
   dict(thing=10, nested_data=numpy.array((1,2,3)))
)
#>>> '{"thing": 10, "nested_data": [1, 2, 3]}'
Jeff Hykin
1

Here is an implementation that works for me and removes all NaNs (assuming the input is a simple object, i.e. a list or dict):

from numpy import isnan

def remove_nans(my_obj, val=None):
    if isinstance(my_obj, list):
        for i, item in enumerate(my_obj):
            if isinstance(item, list) or isinstance(item, dict):
                my_obj[i] = remove_nans(my_obj[i], val=val)

            else:
                try:
                    if isnan(item):
                        my_obj[i] = val
                except Exception:
                    pass

    elif isinstance(my_obj, dict):
        for key, item in my_obj.items():  # use .iteritems() on Python 2
            if isinstance(item, list) or isinstance(item, dict):
                my_obj[key] = remove_nans(my_obj[key], val=val)

            else:
                try:
                    if isnan(item):
                        my_obj[key] = val
                except Exception:
                    pass

    return my_obj
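For example (NaN values are replaced with the default val=None, which serializes as JSON null):

import json

obj = {'a': [1.0, float('nan')], 'b': {'c': float('nan')}}
print(json.dumps(remove_nans(obj)))
# {"a": [1.0, null], "b": {"c": null}}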
Roei Bahumi
1

This is a different answer, but it might help people who are trying to save data and then read it back.
There is hickle, which is faster than pickle and easier.
I tried to save and read a pickle dump, but while reading there were a lot of problems and I wasted an hour and still didn't find a solution, though I was working on my own data to create a chatbot.

vec_x and vec_y are numpy arrays:

import hickle as hkl

data = [vec_x, vec_y]
hkl.dump(data, 'new_data_file.hkl')

Then you just read it and perform the operations:

data2 = hkl.load('new_data_file.hkl')
KS HARSHA
1

You can do a simple for loop, checking the types:

with open("jsondontdoit.json", 'w') as fp:
    for key in bests.keys():
        if type(bests[key]) == np.ndarray:
            bests[key] = bests[key].tolist()
            continue
        for idx in bests[key]:
            if type(bests[key][idx]) == np.ndarray:
                bests[key][idx] = bests[key][idx].tolist()
    json.dump(bests, fp)  # the with block closes the file, so no fp.close() is needed
Robert GRZELKA
0

TypeError: array([[0.46872085, 0.67374235, 1.0218339 , 0.13210179, 0.5440686 , 0.9140083 , 0.58720225, 0.2199381 ]], dtype=float32) is not JSON serializable

The above-mentioned error was thrown when I tried to pass a list of data to model.predict() while expecting the response in JSON format.

1        json_file = open('model.json','r')
2        loaded_model_json = json_file.read()
3        json_file.close()
4        loaded_model = model_from_json(loaded_model_json)
5        # load weights into the new model
6        loaded_model.load_weights("model.h5")
7        loaded_model.compile(optimizer='adam', loss='mean_squared_error')
8        X = [[874,12450,678,0.922500,0.113569]]
9        d = pd.DataFrame(X)
10       prediction = loaded_model.predict(d)
11       return jsonify(prediction)

But luckily I found the hint to resolve the error: serializing objects is only applicable for the following conversions. The mapping should be as follows: object - dict, array - list, string - string, integer - integer.

If you scroll up to line number 10, prediction = loaded_model.predict(d), this line of code was generating output of type array; when you try to convert an array to JSON format, it's not possible.

Finally I found the solution, just by converting the obtained output to the type list with the following lines of code:

prediction = loaded_model.predict(d)
listtype = prediction.tolist()
return jsonify(listtype)

Boom! Finally got the expected output.

0

I had the same problem, but a little bit different: my values were of type float32, so I addressed it by converting them to simple Python floats with float(values).
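A minimal sketch of that approach (names and values illustrative):

import json
import numpy as np

values = {'score': np.float32(0.5), 'label': 'a'}
clean = {k: float(v) if isinstance(v, np.float32) else v for k, v in values.items()}
print(json.dumps(clean))  # {"score": 0.5, "label": "a"}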