I have 5 csv files in a folder: 1.csv, 2.csv, 3.csv, 4.csv, 5.csv. All files have the same structure and column names.
I would like all of the files to be in a single pandas DataFrame, df. Is there any way to achieve this?
import pandas as pd

df1 = pd.read_csv('1.csv')
df2 = pd.read_csv('2.csv')
df = pd.concat([df1, df2])  # stacks the rows of df1 and df2 into one DataFrame
The glob module can be useful here, together with pandas.concat. This approach lists the *.csv files in a directory, reads each one into a DataFrame, appends it to a list, and then concatenates the list into a single DataFrame.
import glob

import pandas as pd

dfs = []
for fin in glob.glob('*.csv'):
    dfs.append(pd.read_csv(fin))

# ignore_index=True renumbers the rows so labels from the
# individual files don't repeat in the combined DataFrame
df = pd.concat(dfs, ignore_index=True)
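As a self-contained sketch, the same pattern can be checked end to end by writing two small CSVs to a temporary directory first (the column names `a` and `b` and the temp-dir setup here are illustrative, not from the question):

```python
import glob
import os
import tempfile

import pandas as pd

# Create a throwaway directory holding two CSVs with identical columns.
tmpdir = tempfile.mkdtemp()
pd.DataFrame({'a': [1, 2], 'b': [3, 4]}).to_csv(
    os.path.join(tmpdir, '1.csv'), index=False)
pd.DataFrame({'a': [5, 6], 'b': [7, 8]}).to_csv(
    os.path.join(tmpdir, '2.csv'), index=False)

# Read every CSV in the directory and stack the rows into one DataFrame.
# sorted() makes the file order deterministic, since glob does not.
dfs = [pd.read_csv(f) for f in sorted(glob.glob(os.path.join(tmpdir, '*.csv')))]
df = pd.concat(dfs, ignore_index=True)

print(len(df))           # 4 rows: 2 from each file
print(list(df.columns))  # ['a', 'b']
```

Note that glob.glob returns files in arbitrary order; sorting the list is worthwhile whenever the row order of the combined DataFrame matters.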