
Suppose I have 4 small DataFrames

df1, df2, df3 and df4

import pandas as pd

df1 = pd.DataFrame([['a', 1, 10], ['a', 2, 20], ['b', 1, 4], ['c', 1, 2], ['e', 2, 10]],
                   columns=['name', 'id', 'pricepart1'])
df2 = pd.DataFrame([['a', 1, 15], ['a', 2, 20], ['c', 1, 2]],
                   columns=['name', 'id', 'pricepart2'])
df3 = pd.DataFrame([['d', 1, 10], ['e', 2, 20], ['f', 1, 1]],
                   columns=['name', 'id', 'pricepart3'])
df4 = pd.DataFrame([['d', 1, 10], ['e', 2, 20], ['f', 1, 15]],
                   columns=['name', 'id', 'pricepart4'])

Those are the 4 DataFrames created above; what I would like is the merge in the code below.

# Merge the DataFrames pairwise on the shared key columns
df = pd.merge(df1, df2, on=['name', 'id'], how='outer')
df = pd.merge(df, df3, on=['name', 'id'], how='outer')
df = pd.merge(df, df4, on=['name', 'id'], how='outer')

# Fill NaN values with 'missing'
df = df.fillna('missing')

So I have achieved this for 4 DataFrames that don't have many rows and columns.

Basically, I want to extend the above outer-merge solution to MULTIPLE (48) DataFrames, each of size 62245 × 3:

So I came up with the following by building on another Stack Overflow answer that used reduce with a lambda:

from functools import reduce
import pandas as pd
import numpy as np

# Create the 48 DataFrames of size 62245 x 3
dfList = []
for i in range(48):
    dfList.append(pd.DataFrame(np.random.randint(0, 100, size=(62245, 3)),
                               columns=['name', 'id', 'pricepart' + str(i + 1)]))

# The solution I came up with to extend the merge to more than 3 DataFrames
df_merged = reduce(lambda left, right: pd.merge(left, right, on=['name', 'id'], how='outer'),
                   dfList).fillna('missing')

This is causing a MemoryError.

I do not know how to stop the kernel from dying; I've been stuck on this for two days. Some code for the EXACT merge operation I performed that does not cause the MemoryError, or something that gives the same result, would be really appreciated.

Also, the 3 columns in the main DataFrame (NOT the reproducible 48 DataFrames in the example) are of type int64, int64 and float64, and I'd prefer they stay that way because of the integers and floats they represent.
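To illustrate with the toy frames above (my real data differs, but the mechanics are the same): the outer merge keeps the key columns' dtypes intact and only widens the value columns to float64 to hold NaN; it is the fillna('missing') step that forces them all the way to object dtype:

# Illustration using the small example frames from above
merged = pd.merge(df1, df2, on=['name', 'id'], how='outer')
print(merged.dtypes)                    # pricepart columns are float64 (NaN-capable)
print(merged.fillna('missing').dtypes)  # pricepart columns become object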

EDIT:

Instead of iteratively running the merge operations (or using reduce with a lambda), I have done it in groups of 2! Also, I've changed the datatype of some columns; some did not need to be float64, so I brought them down to float16. It gets very far but still ends up throwing a MemoryError.
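The dtype change amounts to roughly this sketch (the last column of each generated frame is its pricepart column; note that float16 only represents integers exactly up to 2048):

# Rough sketch of the downcast described above
for df in dfList:
    price_col = df.columns[-1]                      # the per-frame pricepart column
    df[price_col] = df[price_col].astype('float16')

The pairwise merging itself: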

intermediatedfList = dfList

tempdfList = []

#Merge the 48 frames two at a time, until only 2 remain (one final merge is still needed after the loop)
while len(intermediatedfList) != 2:

    #If there is an even number of DataFrames
    if len(intermediatedfList) % 2 == 0:

        #Reset the auxiliary list each pass; reusing the old one would alias
        #intermediatedfList and grow it while we iterate over it
        tempdfList = []

        #Go in steps of two
        for i in range(0, len(intermediatedfList), 2):

            #Merge the DataFrames at index i and i + 1
            df1 = pd.merge(intermediatedfList[i], intermediatedfList[i + 1],
                           on=['name', 'id'], how='outer')
            df1.info(memory_usage='deep')  #info() prints directly; no print() wrapper needed

            #Append the merged pair to the auxiliary list
            tempdfList.append(df1)

        #After merging two at a time into the auxiliary list tempdfList,
        #set intermediatedfList equal to it so the while loop can continue
        intermediatedfList = tempdfList

    else:

        #If there is an odd number of DataFrames, keep the first DataFrame out
        tempdfList = [intermediatedfList[0]]

        #Go in steps of two starting from 1 instead of 0
        for i in range(1, len(intermediatedfList), 2):

            #Merge the DataFrames at index i and i + 1
            df1 = pd.merge(intermediatedfList[i], intermediatedfList[i + 1],
                           on=['name', 'id'], how='outer')
            df1.info(memory_usage='deep')
            tempdfList.append(df1)

        #After merging two at a time into the auxiliary list tempdfList,
        #set intermediatedfList equal to it so the while loop can continue
        intermediatedfList = tempdfList

Is there any way I can optimize my code to avoid the MemoryError? I've even used an AWS instance with 192 GB of RAM (I now owe them $7 which I could've given one of y'all); that gets farther than my local machine, but it still throws a MemoryError after reducing a list of 28 DataFrames to 4.

– ababuji
  • @coldspeed if I'm not wrong the concat in your answer should be doing the outer merge correctly, as indeed the output of your answer shows - in the example by Abhishek, you get the same answer as he does, and not the empty DataFrame you'd get with 'inner'. – Marco Spinaci Jun 20 '18 at 11:16
  • @MarcoSpinaci is that so? ...huh, interesting, thanks for the clarification! – cs95 Jun 20 '18 at 13:18

4 Answers


You may get some benefit from performing index-aligned concatenation using pd.concat. This should hopefully be faster and more memory efficient than an outer merge as well.

df_list = [df1, df2, ...]
for df in df_list:
    df.set_index(['name', 'id'], inplace=True)

df = pd.concat(df_list, axis=1)  # or pass join='inner' to keep only keys present in every frame
df.reset_index(inplace=True)

Alternatively, you can replace the concat (second step) with an iterative join:

from functools import reduce
df = reduce(lambda x, y: x.join(y), df_list)

This may or may not be better than the merge.
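Note that DataFrame.join defaults to how='left'; to mirror the outer merge from the question, pass how='outer', e.g.:

from functools import reduce

# join defaults to a left join; how='outer' keeps unmatched keys from both sides
df = reduce(lambda x, y: x.join(y, how='outer'), df_list)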

– cs95
  • concat is definitely way faster than append or merge. – MrE May 01 '19 at 19:57
  • Any idea how I can overcome this: https://stackoverflow.com/questions/58077949/dataframe-merge-gives-process-finished-with-exit-code-137-interrupted-by-signa? – Ratha Sep 25 '19 at 01:19

This seems like part of what Dask DataFrames were designed for (out-of-core operations on DataFrames). See Best way to join two large datasets in Pandas for example code. Sorry for not copying and pasting, but I don't want to seem like I'm taking credit from the answerer in the linked entry.
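As a rough, untested sketch of what that could look like with the question's setup (assuming dask is installed; npartitions is a guess to tune against your memory budget):

import dask.dataframe as dd

# Wrap each pandas frame in a partitioned Dask frame so the merges are
# evaluated lazily, partition by partition, rather than all at once in RAM
ddfs = [dd.from_pandas(df, npartitions=8) for df in dfList]

merged = ddfs[0]
for right in ddfs[1:]:
    merged = merged.merge(right, on=['name', 'id'], how='outer')

result = merged.compute()  # materialize the final pandas DataFrame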

– user85779

You can try a simple for loop. The only memory optimization I have applied is downcasting to the most compact integer type via pd.to_numeric.

I am also using a dictionary to store dataframes. This is good practice for holding a variable number of variables.

import pandas as pd

dfs = {}
dfs[1] = pd.DataFrame([['a', 1, 10], ['a', 2, 20], ['b', 1, 4], ['c', 1, 2], ['e', 2, 10]])
dfs[2] = pd.DataFrame([['a', 1, 15], ['a', 2, 20], ['c', 1, 2]])
dfs[3] = pd.DataFrame([['d', 1, 10], ['e', 2, 20], ['f', 1, 1]])  
dfs[4] = pd.DataFrame([['d', 1, 10], ['e', 2, 20], ['f', 1, 15]])   

df = dfs[1].copy()

for i in range(2, max(dfs)+1):
    # rename the value column (labeled 2) so it does not clash, then merge on the keys
    df = pd.merge(df, dfs[i].rename(columns={2: i+1}), on=[0, 1], how='outer').fillna(-1)
    # downcast the value columns to the smallest integer dtype that fits
    df.iloc[:, 2:] = df.iloc[:, 2:].apply(pd.to_numeric, downcast='integer')

print(df)

   0  1   2   3   4   5
0  a  1  10  15  -1  -1
1  a  2  20  20  -1  -1
2  b  1   4  -1  -1  -1
3  c  1   2   2  -1  -1
4  e  2  10  -1  20  20
5  d  1  -1  -1  10  10
6  f  1  -1  -1   1  15

You should not, as a rule, combine strings such as "missing" with numeric types, as this will turn your entire series into object-dtype series. Here we use -1, but you may wish to use NaN with a float dtype instead.
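If you do want NaN with a float dtype, a variant of the same loop (same dfs dictionary; a sketch, not tested at the 48-frame scale) drops the fillna and downcasts to compact floats instead:

df = dfs[1].copy()

for i in range(2, max(dfs)+1):
    df = pd.merge(df, dfs[i].rename(columns={2: i+1}), on=[0, 1], how='outer')
    # keep NaN for missing entries; downcast='float' shrinks each value column
    # to the smallest float dtype (float32 at minimum)
    df.iloc[:, 2:] = df.iloc[:, 2:].apply(pd.to_numeric, downcast='float')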

– jpp

So, you have 48 dfs with 3 columns each: name, id, and a different price column for every df.

You don't have to use merge at all.

Instead, if you concat all the dfs:

df = pd.concat([df1,df2,df3,df4])

You will receive:

Out[3]: 
   id name  pricepart1  pricepart2  pricepart3  pricepart4
0   1    a        10.0         NaN         NaN         NaN
1   2    a        20.0         NaN         NaN         NaN
2   1    b         4.0         NaN         NaN         NaN
3   1    c         2.0         NaN         NaN         NaN
4   2    e        10.0         NaN         NaN         NaN
0   1    a         NaN        15.0         NaN         NaN
1   2    a         NaN        20.0         NaN         NaN
2   1    c         NaN         2.0         NaN         NaN
0   1    d         NaN         NaN        10.0         NaN
1   2    e         NaN         NaN        20.0         NaN
2   1    f         NaN         NaN         1.0         NaN
0   1    d         NaN         NaN         NaN        10.0
1   2    e         NaN         NaN         NaN        20.0
2   1    f         NaN         NaN         NaN        15.0

Now you can group by name and id and take the sum:

# min_count=1 keeps all-NaN groups as NaN (sum alone would fill them with 0),
# so the fillna('missing') actually has something to fill
df.groupby(['name', 'id']).sum(min_count=1).fillna('missing').reset_index()

If you try it with the 48 dfs, you will see it avoids the MemoryError:

dfList = []
# Create the 48 DataFrames of size 62245 x 3
for i in range(48):
    dfList.append(pd.DataFrame(np.random.randint(0, 100, size=(62245, 3)),
                               columns=['name', 'id', 'pricepart' + str(i + 1)]))

df = pd.concat(dfList)
df.groupby(['name', 'id']).sum(min_count=1).fillna('missing').reset_index()
– theletz
  • This is identical to my answer and worse because it replicates the name and ID column. -1 – cs95 Jun 23 '18 at 16:18
  • Also, nothing was said about a groupby being needed here. – cs95 Jun 23 '18 at 16:19
  • Maybe I wasn't clear enough. I edited the answer. Did you try to run it with the 48 dfs? It solves the MemoryError... – theletz Jun 23 '18 at 19:19
  • Just because it worked for you doesn't mean it'd work for OP. And that wasn't the _point_ of my last comment, I've already mentioned a `concat` solution before you did. So if your answer fixes the memory error, so will mine. – cs95 Jun 23 '18 at 19:21