
I am working with CSV files where several of the columns contain a simple JSON object (a few key/value pairs) while the other columns are normal. Here is an example:

name,dob,stats
john smith,1/1/1980,"{""eye_color"": ""brown"", ""height"": 160, ""weight"": 76}"
dave jones,2/2/1981,"{""eye_color"": ""blue"", ""height"": 170, ""weight"": 85}"
bob roberts,3/3/1982,"{""eye_color"": ""green"", ""height"": 180, ""weight"": 94}"

After using df = pandas.read_csv('file.csv'), what's the most efficient way to parse and split the stats column into additional columns?

After about an hour, the only thing I could come up with was:

import json
stdf = df['stats'].apply(json.loads)
stlst = list(stdf)
stjson = json.dumps(stlst)
df.join(pandas.read_json(stjson))

This seems like I'm doing it wrong, and it's quite a bit of work considering I'll need to do this on three columns regularly.

Desired output is the dataframe object below. I added the following lines of code to get there in my (crappy) way:

df = df.join(pandas.read_json(stjson))
del(df['stats'])
In [14]: df

Out[14]:
          name       dob eye_color  height  weight
0   john smith  1/1/1980     brown     160      76
1   dave jones  2/2/1981      blue     170      85
2  bob roberts  3/3/1982     green     180      94
Shaido
profesor_tortuga

6 Answers


I think applying json.loads is a good idea, but from there you can simply convert it directly to dataframe columns instead of writing/loading it again:

import json
import pandas as pd

stdf = df['stats'].apply(json.loads)
pd.DataFrame(stdf.tolist())  # or stdf.apply(pd.Series)

or alternatively in one step:

df.join(df['stats'].apply(json.loads).apply(pd.Series))
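For reference, here is the one-step version as a complete, runnable sketch, using the sample data from the question read from an in-memory buffer so it is self-contained:

```python
import io
import json

import pandas as pd

# Sample data from the question, read from an in-memory buffer.
csv_data = '''name,dob,stats
john smith,1/1/1980,"{""eye_color"": ""brown"", ""height"": 160, ""weight"": 76}"
dave jones,2/2/1981,"{""eye_color"": ""blue"", ""height"": 170, ""weight"": 85}"
bob roberts,3/3/1982,"{""eye_color"": ""green"", ""height"": 180, ""weight"": 94}"'''

df = pd.read_csv(io.StringIO(csv_data))

# Expand the JSON column into real columns, then drop the original.
df = df.join(df['stats'].apply(json.loads).apply(pd.Series))
df = df.drop(columns=['stats'])
print(df)
```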
joris
    ty, this was perfectly sufficient for my current task but i marked the other one as the answer since it's more broadly applicable – profesor_tortuga Dec 19 '13 at 15:39
  • I was wondering how to parallelise this statement df.join(df['stats'].apply(json.loads).apply(pd.Series)). Any help please? – Neeraj Kumar Jul 21 '18 at 12:18

There is a slightly easier way, but ultimately you'll still have to call json.loads. There is a notion of a converter in pandas.read_csv:

converters : dict, optional

Dict of functions for converting values in certain columns. Keys can either be integers or column labels.

So first define your custom parser. In this case the below should work:

def CustomParser(data):
    import json
    j1 = json.loads(data)
    return j1

In your case you'll have something like:

df = pandas.read_csv(f1, converters={'stats':CustomParser},header=0)

We are telling read_csv to read the data in the standard way, but to use our custom parser for the stats column. This will make each entry in the stats column a dict.

From here, we can use a little hack to directly append these columns in one step with the appropriate column names. This will only work for regular data (every JSON object needs to have the same set of keys, or else missing values need to be handled in our CustomParser).

df[sorted(df['stats'][0].keys())] = df['stats'].apply(pandas.Series)

On the Left Hand Side, we get the new column names from the keys of the element of the stats column. Each element in the stats column is a dictionary. So we are doing a bulk assign. On the Right Hand Side, we break up the 'stats' column using apply to make a data frame out of each key/value pair.
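If the data is not regular, one way around the fixed-key assumption is to build the frame from the parsed dicts directly, since pandas takes the union of all keys. A small sketch with hypothetical irregular data:

```python
import json

import pandas as pd

# Hypothetical irregular data: "weight" is missing from the second record,
# which would break the fixed-key bulk assignment above.
raw = pd.Series([
    '{"eye_color": "brown", "height": 160, "weight": 76}',
    '{"eye_color": "blue", "height": 170}',
])

# pd.DataFrame on a list of dicts takes the union of all keys and fills
# missing entries with NaN.
expanded = pd.DataFrame(raw.apply(json.loads).tolist())
print(expanded)
```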

Paul
    thanks, this is great, i expect i'll need to deal with more mutant data in the future and this will help. – profesor_tortuga Dec 19 '13 at 15:40
    The last line in this answer does not guarantee that the dict elements get matched to the correct column names. `.apply(pandas.Series)` converts each row into a Series and automatically sorts the index, which in this case is the list of dictionary keys. So for consistency, you have to ensure that the list of keys on the LHS is sorted. – abeboparebop May 26 '17 at 13:03
    I would `import json` and then use: `pandas.read_csv(f1, converters={'stats': json.loads})`. You don't need to define a new function, and you definitely don't need to import inside it. – gberger Oct 18 '17 at 13:38
    Hello. I tried this in Python 3 and got the error: ValueError: Columns must be same length as key. My requirement and expected output is exactly the same except that I have nested values in my JSON. – A.Ali Jun 22 '18 at 14:02
  • only issue is when the json keys are inconsistent, Columns must be same length as key error pops – Francis Manoj Fernnado Sep 29 '18 at 08:23

Option 1

If you dumped the column with json.dumps before you wrote it to csv, you can read it back in with:

import json
import pandas as pd

df = pd.read_csv('data/file.csv', converters={'json_column_name': json.loads})

Option 2

If you didn't then you might need to use this:

import json
import pandas as pd

df = pd.read_csv('data/file.csv', converters={'json_column_name': eval})
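A word of caution on Option 2: eval executes arbitrary code, which is risky on untrusted files. If the column holds Python-literal-style dicts (single quotes) rather than strict JSON, the standard library's ast.literal_eval is a safer substitute, though it cannot parse JSON's true/false/null. A minimal sketch with made-up data:

```python
import ast
import io

import pandas as pd

# Hypothetical file whose "stats" column holds Python-literal dicts
# (single quotes), which json.loads cannot parse.
csv_data = '''name,stats
john,"{'height': 160, 'weight': 76}"'''

# ast.literal_eval evaluates literals only, so a malicious value cannot
# execute code the way it could with eval.
df = pd.read_csv(io.StringIO(csv_data), converters={'stats': ast.literal_eval})
```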

Option 3

For more complicated situations you can write a custom converter like this:

import json
import pandas as pd

def parse_column(data):
    try:
        return json.loads(data)
    except Exception as e:
        print(e)
        return None


df = pd.read_csv('data/file.csv', converters={'json_column_name': parse_column})
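As a quick illustration of Option 3, the converter below returns None for rows that fail to parse instead of raising mid-read (the input data is made up):

```python
import io
import json

import pandas as pd

def parse_column(data):
    try:
        return json.loads(data)
    except Exception:
        return None

# Hypothetical input with one malformed JSON value in the second row.
csv_data = '''name,stats
john,"{""height"": 160}"
dave,not json'''

# Bad rows come back as None instead of raising.
df = pd.read_csv(io.StringIO(csv_data), converters={'stats': parse_column})
```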
Glen Thompson
  • Hello, I have got nan values in my JSON string 'sv': [nan, nan, nan, nan, nan, 1.0] and I got the error "name 'nan' is not defined". Do you know how to handle that case? – Garf Jan 28 '20 at 15:45
  • Hmm you could try Option 3, the custom parser and do something like data = data.replace('nan,', 'None,') and then return eval(data), be careful though with the replacement, and other values that you don't want to replace being replaced. I'm not sure what your data looks like. You could maybe get a bit smarter and use regex something like this `(?<=[\[,\s\]])(nan)(?=[\,\s\]])` which should match all the `nan` but not stuff like `bnan` or `*nan` - This is a good tool to play around on https://regexr.com/ – Glen Thompson Jan 28 '20 at 16:28

Paul's original answer was very nice but not correct in general, because there is no assurance that the ordering of columns is the same on the left-hand side and the right-hand side of the last line. (In fact, it does not seem to work on the test data in the question, instead erroneously switching the height and weight columns.)

We can fix this by ensuring that the list of dict keys on the LHS is sorted. This works because the apply on the RHS automatically sorts by the index, which in this case is the list of column names.

def CustomParser(data):
  import json
  j1 = json.loads(data)
  return j1

df = pandas.read_csv(f1, converters={'stats':CustomParser},header=0)
df[sorted(df['stats'][0].keys())] = df['stats'].apply(pandas.Series)
abeboparebop
    Thx for spotting that. I have updated my answer with your additional sorted for completeness – Paul Mar 20 '18 at 12:22

The json_normalize function in the pandas.io.json package helps to do this without a custom function (in newer pandas versions it is available directly as pd.json_normalize).

(assuming you are loading the data from a file)

import json

import pandas as pd
from pandas.io.json import json_normalize

df = pd.read_csv(file_path)
stats_df = json_normalize(df['stats'].apply(json.loads).tolist())
stats_df.set_index(df.index, inplace=True)
df = df.join(stats_df)
df.drop('stats', axis=1, inplace=True)
  • If you have DateTime values in your .csv file, df[sorted(df['stats'][0].keys())] = df['stats'].apply(pandas.Series) will mess up the datetime values.
  • This link has some tips on how to read a csv file with JSON strings into the dataframe.

You could do the following to read a csv file with a JSON string column and convert the JSON string into columns.

  1. Read your csv into the dataframe (read_df)

    read_df = pd.read_csv('yourFile.csv', converters={'state':json.loads}, header=0, quotechar="'")

  2. Convert the json string column to a new dataframe

    state_df = read_df['state'].apply(pd.Series)

  3. Merge the 2 dataframe with index number.

    df = pd.merge(read_df, state_df, left_index=True, right_index=True)
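The three steps above can be put together as one runnable sketch; the file contents and the single-quote quotechar are illustrative assumptions:

```python
import io
import json

import pandas as pd

# Hypothetical file where the JSON column is wrapped in single quotes.
csv_data = """name,state
john,'{"city": "NYC", "zip": "10001"}'
dave,'{"city": "LA", "zip": "90001"}'"""

# Step 1: read the csv, parsing the JSON column with a converter.
read_df = pd.read_csv(io.StringIO(csv_data),
                      converters={'state': json.loads},
                      header=0, quotechar="'")
# Step 2: expand the dicts into their own dataframe.
state_df = read_df['state'].apply(pd.Series)
# Step 3: merge the two frames on the index.
df = pd.merge(read_df, state_df, left_index=True, right_index=True)
```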

Teana