
I have a list of Pandas dataframes that I would like to combine into one Pandas dataframe. I am using Python 2.7.10 and Pandas 0.16.2.

I created the list of dataframes from:

import pandas as pd
dfs = []
sqlall = "select * from mytable"

for chunk in pd.read_sql_query(sqlall, cnxn, chunksize=10000):
    dfs.append(chunk)

This returns a list of dataframes:

type(dfs[0])
Out[6]: pandas.core.frame.DataFrame

type(dfs)
Out[7]: list

len(dfs)
Out[8]: 408

Here is some sample data

# sample dataframes
d1 = pd.DataFrame({'one' : [1., 2., 3., 4.], 'two' : [4., 3., 2., 1.]})
d2 = pd.DataFrame({'one' : [5., 6., 7., 8.], 'two' : [9., 10., 11., 12.]})
d3 = pd.DataFrame({'one' : [15., 16., 17., 18.], 'two' : [19., 10., 11., 12.]})

# list of dataframes
mydfs = [d1, d2, d3]

I would like to combine d1, d2, and d3 into one pandas dataframe. Alternatively, a method of reading a large-ish table directly into a dataframe when using the chunksize option would be very helpful.

cs95
Whitebeard

6 Answers


Given that all the dataframes have the same columns, you can simply concat them:

import pandas as pd
df = pd.concat(list_of_dataframes)
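
A quick sanity check using the sample frames from the question (`ignore_index=True` is optional; it just rebuilds a fresh 0..n-1 index instead of repeating each chunk's index):

```python
import pandas as pd

# the sample frames from the question
d1 = pd.DataFrame({'one': [1., 2., 3., 4.], 'two': [4., 3., 2., 1.]})
d2 = pd.DataFrame({'one': [5., 6., 7., 8.], 'two': [9., 10., 11., 12.]})
d3 = pd.DataFrame({'one': [15., 16., 17., 18.], 'two': [19., 10., 11., 12.]})

# stack the frames vertically into one dataframe
df = pd.concat([d1, d2, d3], ignore_index=True)
print(df.shape)  # (12, 2)
```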
Trenton McKinney
DeepSpace
    Note that column names must match for the values to line up when concatenating vertically. Otherwise, the result takes the union of the columns and fills the missing values with NaN. – bbrame Dec 10 '22 at 15:52

Just to add a few more details:

Example:

import pandas as pd
list1 = [df1, df2, df3]

  • Row-wise concatenation & ignoring indexes

    pd.concat(list1, axis=0, ignore_index=True)
    

    Note: if the column names are not the same, NaN is inserted for the values of the missing columns

  • Column-wise concatenation & want to keep column names

    pd.concat(list1, axis=1, ignore_index=False)
    

    If ignore_index=True, the column names are replaced with numbers from 0 to n-1, where n is the total number of columns
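
To make the two cases concrete, a small sketch (df1 and df2 here are hypothetical single-column frames):

```python
import pandas as pd

df1 = pd.DataFrame({'a': [1, 2]})
df2 = pd.DataFrame({'a': [3, 4]})

# row-wise: values stack, and the index is renumbered 0..3
rows = pd.concat([df1, df2], axis=0, ignore_index=True)
print(rows['a'].tolist())  # [1, 2, 3, 4]

# column-wise: columns sit side by side, and names are kept (both 'a')
cols = pd.concat([df1, df2], axis=1, ignore_index=False)
print(list(cols.columns))  # ['a', 'a']
```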

rmswrp

If the dataframes DO NOT all have the same columns, try the following:

df = pd.DataFrame.from_dict(map(dict,df_list))
cs95
meyerson
    This solution doesn't work for me on Python 3.6.5 / Pandas v0.23.0. It errors with `TypeError: data argument can't be an iterator`. Converting to `list` first (to mimic Python 2.7) comes up with unexpected results too. – jpp Jul 16 '18 at 22:59
  • And if all the dataframes have the same columns, what should we do? – Nadhir Mar 16 '20 at 20:37
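
As the comments point out, the map-based call breaks on Python 3. A sketch of an alternative for frames with differing columns is plain pd.concat, which takes the union of the columns and fills the gaps with NaN:

```python
import pandas as pd

a = pd.DataFrame({'x': [1, 2]})
b = pd.DataFrame({'y': [3, 4]})

# union of columns; values missing from either frame become NaN
combined = pd.concat([a, b], ignore_index=True, sort=False)
print(sorted(combined.columns))         # ['x', 'y']
print(int(combined['x'].isna().sum()))  # 2
```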

You can also do it with functional programming:

from functools import reduce
reduce(lambda df1, df2: df1.merge(df2, "outer"), mydfs)
cs95
Jay Wang
    `from functools import reduce` to use `reduce` – nishant Apr 24 '20 at 12:38
  • 3
    Would not recommend doing a pairwise merge for multiple DataFrames, it is not efficient at all. See `pd.concat` or `join`, both accept a list of frames and join on the index by default. – cs95 Jun 29 '20 at 06:09
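
Illustrating the reduce on the question's sample frames: each pairwise outer merge unions the rows, and since no (one, two) pair repeats across frames the result here matches a plain concat:

```python
from functools import reduce
import pandas as pd

d1 = pd.DataFrame({'one': [1., 2., 3., 4.], 'two': [4., 3., 2., 1.]})
d2 = pd.DataFrame({'one': [5., 6., 7., 8.], 'two': [9., 10., 11., 12.]})
d3 = pd.DataFrame({'one': [15., 16., 17., 18.], 'two': [19., 10., 11., 12.]})

# fold the list pairwise; each merge outer-joins on the shared columns
merged = reduce(lambda a, b: a.merge(b, "outer"), [d1, d2, d3])
print(merged.shape)  # (12, 2)
```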

concat also works nicely with a list comprehension that pulls rows from an existing dataframe using loc:

df = pd.read_csv('./data.csv')  # i.e. a dataframe read from a CSV file with a "userID" column

review_ids = ['1', '2', '3']  # i.e. the ID values to grab from the dataframe

# get the rows in df whose userID column matches and combine them
dfa = pd.concat([df.loc[df['userID'] == x] for x in review_ids])
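
The same selection can also be written as a single boolean mask with isin, which avoids the per-ID loop (a sketch with made-up data, since data.csv isn't available):

```python
import pandas as pd

# hypothetical stand-in for the CSV contents
df = pd.DataFrame({'userID': ['1', '2', '3', '4'], 'score': [10, 20, 30, 40]})
review_ids = ['1', '2', '3']

# comprehension + concat, as in the answer
dfa = pd.concat([df.loc[df['userID'] == x] for x in review_ids])

# equivalent single boolean-mask selection
dfb = df[df['userID'].isin(review_ids)]
print(dfa['score'].tolist())  # [10, 20, 30]
print(dfb['score'].tolist())  # [10, 20, 30]
```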
Lelouch

pandas concat also works in combination with functools.reduce:

from functools import reduce
import pandas as pd

# read the file in chunks, then fold the chunks together with reduce
chunks = pd.read_csv("http://www.aol.com/users/data.csv", chunksize=10000)
combined = reduce(lambda a, b: pd.concat([a, b]), chunks)
rkellerm