44

I would like to read several Excel files from a directory into pandas and concatenate them into one big DataFrame, but I have not been able to figure it out. I need some help with the for loop and with building the concatenated DataFrame. Here is what I have so far:

import sys
import csv
import glob
import pandas as pd

# get data file names
path =r'C:\DRO\DCL_rawdata_files\excelfiles'
filenames = glob.glob(path + "/*.xlsx")

dfs = []

for df in dfs: 
    xl_file = pd.ExcelFile(filenames)
    df=xl_file.parse('Sheet1')
    dfs.concat(df, ignore_index=True)
jonas
  • Your code in the other question was just fine, just replace `read_csv` with `read_excel`. – joris Jan 03 '14 at 16:22
  • Your code here is not really correct (it was in the other question). You cannot loop over the empty list `dfs` you just created, so loop over the filenames, then `dfs.append(df)` in the loop, and after that `pd.concat(dfs, ignore_index=True)`. – joris Jan 03 '14 at 16:27
  • Could you edit that into my code, since I'm a beginner at Python? – jonas Jan 03 '14 at 16:27
  • Please have a look at your other question. – joris Jan 03 '14 at 16:28
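
For reference, a minimal sketch of the approach joris describes above (collect the frames in a list, then concatenate once at the end), reusing the `path` and `filenames` from the question. Note that `sheet_name` is the keyword in current pandas; the versions contemporary with this question spell it `sheetname`.

import glob
import pandas as pd

path = r'C:\DRO\DCL_rawdata_files\excelfiles'
filenames = glob.glob(path + "/*.xlsx")

dfs = []
for filename in filenames:
    # read 'Sheet1' of each workbook into its own DataFrame
    dfs.append(pd.read_excel(filename, sheet_name='Sheet1'))

# a single concatenation at the end instead of growing a DataFrame inside the loop
big_frame = pd.concat(dfs, ignore_index=True)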

8 Answers

100

As mentioned in the comments, one error you are making is that you are looping over an empty list.

Here is how I would do it, using an example of having 5 identical Excel files that are appended one after another.

(1) Imports:

import os
import pandas as pd

(2) List files:

path = os.getcwd()
files = os.listdir(path)
files

Output:

['.DS_Store',
 '.ipynb_checkpoints',
 '.localized',
 'Screen Shot 2013-12-28 at 7.15.45 PM.png',
 'test1 2.xls',
 'test1 3.xls',
 'test1 4.xls',
 'test1 5.xls',
 'test1.xls',
 'Untitled0.ipynb',
 'Werewolf Modelling',
 '~$Random Numbers.xlsx']

(3) Pick out 'xls' files:

files_xls = [f for f in files if f[-3:] == 'xls']
files_xls

Output:

['test1 2.xls', 'test1 3.xls', 'test1 4.xls', 'test1 5.xls', 'test1.xls']

(4) Initialize empty dataframe:

df = pd.DataFrame()

(5) Loop over list of files to append to empty dataframe:

for f in files_xls:
    data = pd.read_excel(f, 'Sheet1')
    df = df.append(data)

(6) Enjoy your new dataframe. :-)

df

Output:

  Result  Sample
0      a       1
1      b       2
2      c       3
3      d       4
4      e       5
5      f       6
6      g       7
7      h       8
8      i       9
9      j      10
0      a       1
1      b       2
2      c       3
3      d       4
4      e       5
5      f       6
6      g       7
7      h       8
8      i       9
9      j      10
0      a       1
1      b       2
2      c       3
3      d       4
4      e       5
5      f       6
6      g       7
7      h       8
8      i       9
9      j      10
0      a       1
1      b       2
2      c       3
3      d       4
4      e       5
5      f       6
6      g       7
7      h       8
8      i       9
9      j      10
0      a       1
1      b       2
2      c       3
3      d       4
4      e       5
5      f       6
6      g       7
7      h       8
8      i       9
9      j      10
ericmjl
  • This is certainly OK, but I think the approach in the almost identical question http://stackoverflow.com/questions/20906474/import-multiple-csv-files-into-python-pandas-and-concatenate-into-one-dataframe to append to a list and then `pd.concat(the_list)` is cleaner. – joris Jan 03 '14 at 16:37
  • Thank you. I could actually understand this. But why the `f[-3:]` in the statement `files_xls = [f for f in files if f[-3:] == 'xls']`? – jonas Jan 03 '14 at 16:41
  • Glad to be of help! I was where you were about 6 months ago learning Pandas, so I'm glad to be of any help. `f[-3:]` is me parsing each string. The `files` list is essentially a list of strings. Therefore, in the list comprehension, I am asking for files (i.e. strings) whose extensions, i.e. the last 3 characters, are "xls". – ericmjl Jan 03 '14 at 16:44
  • I am late to this, but I had a small doubt in this case. What if there were multiple sheets in these excel files? How to bring those in as well? – Manas Jani Sep 15 '17 at 12:38
  • @ManasJani: you can check the docs for `pd.read_excel` (they are [here](http://pandas.pydata.org/pandas-docs/version/0.20/generated/pandas.read_excel.html)). There is a `sheetname` argument that can be used. – ericmjl Sep 15 '17 at 19:01
  • Yes, it worked. Already did it. The problem I am facing now is that, there is a dictionary of dataframes present, but I want to concat all of them. But the pd.concat command isn't working for me. – Manas Jani Sep 15 '17 at 19:32
  • Post a new question that references this SO Question, and let's see what the community has for you :) – ericmjl Sep 17 '17 at 00:11
  • How could you do this when excel files are stored on a microsoft sharepoint site? – 0004 Aug 29 '18 at 23:01
  • I'd definitely go for `pd.concat`, because `DataFrame.append` is super slow. – Aakash Verma Feb 10 '20 at 02:51
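
Following up on the comments about multiple sheets and about preferring `pd.concat`: a hedged sketch, assuming the `files_xls` list from step (3) above. Passing `sheet_name=None` (spelled `sheetname` in the pandas 0.x docs linked above) makes `read_excel` return a dict with one DataFrame per sheet, and `f.endswith('xls')` is an equivalent, slightly more readable test than `f[-3:] == 'xls'`:

import pandas as pd

frames = []
for f in files_xls:
    # sheet_name=None reads every sheet into a dict of {sheet name: DataFrame}
    sheets = pd.read_excel(f, sheet_name=None)
    frames.extend(sheets.values())

# one concat at the end avoids repeatedly growing a DataFrame with append
df = pd.concat(frames, ignore_index=True)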
6

This works with Python 2.x.

Run it from the directory where the Excel files are.

See http://pbpython.com/excel-file-combine.html

import pandas as pd
import glob

all_data = pd.DataFrame()
for f in glob.glob("*.xlsx"):
    df = pd.read_excel(f)
    all_data = all_data.append(df, ignore_index=True)

# now save the combined data frame to a new Excel file
writer = pd.ExcelWriter('output.xlsx')
all_data.to_excel(writer, 'sheet1')
writer.save()
john blue
5

There is an even neater way to do that.

# import libraries
import glob
import pandas as pd

# get the absolute path of all Excel files 
allExcelFiles = glob.glob("/path/to/Excel/files/*.xlsx")

# read all Excel files at once
df = pd.concat(pd.read_excel(excelFile) for excelFile in allExcelFiles)
zoump
4

You can use list comprehension inside concat:

import os
import pandas as pd

path = '/path/to/directory/'
filenames = [file for file in os.listdir(path) if file.endswith('.xlsx')]

df = pd.concat([pd.read_excel(path + file) for file in filenames], ignore_index=True)

With `ignore_index=True` the index of `df` will be labeled 0, …, n - 1.
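
To see the effect, here is a quick sketch with two small hypothetical frames standing in for two files read with `pd.read_excel`:

import pandas as pd

a = pd.DataFrame({'Sample': [1, 2]})
b = pd.DataFrame({'Sample': [3, 4]})

pd.concat([a, b]).index.tolist()                     # [0, 1, 0, 1]  (original indices kept)
pd.concat([a, b], ignore_index=True).index.tolist()  # [0, 1, 2, 3]  (relabeled 0, ..., n - 1)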

rachwa
1

This can be done in this way:

import pandas as pd
import glob

all_data = pd.DataFrame()
for f in glob.glob("/path/to/directory/*.xlsx"):
    df = pd.read_excel(f)
    all_data = all_data.append(df,ignore_index=True)

all_data.to_csv("new_combined_file.csv")  
Abhilash Ramteke
1

#shortcut

import pandas as pd 
from glob import glob

dfs=[]
for f in glob("data/*.xlsx"):
    dfs.append(pd.read_excel(f))
df=pd.concat(dfs, ignore_index=True)
Dan
-1
import pandas as pd
import os

os.chdir('...')

# read the first file to establish the column names
fdf = pd.read_excel("first_file.xlsx", sheet_name="sheet_name")

# create a counter column to segregate each file's data
fdf["counter"] = 1

nm = list(fdf)   # reference column names
c = 2

# read up to the first 1000 files
for i in os.listdir():
    print(c)
    if c < 1001:
        if "xlsx" in i and i != "first_file.xlsx":   # skip the file already read above
            df = pd.read_excel(i, sheet_name="sheet_name")
            df["counter"] = c
            if list(df) == nm:
                fdf = fdf.append(df)
                c += 1
            else:
                print("header names do not match")
        else:
            print("not xlsx")

fdf = fdf.reset_index(drop=True)
# relax
TBhavnani
-1
import pandas as pd
import os

files = [file for file in os.listdir('./Salesfolder')]
all_months_data = pd.DataFrame()
for file in files:
    df = pd.read_csv("./Salesfolder/" + file)
    all_months_data = pd.concat([all_months_data, df])
all_months_data.to_csv("all_data.csv", index=False)

You can read all of the files from a folder (Salesfolder in my case; substitute your own local path), iterate through them, and concatenate each DataFrame onto an initially empty one. For Excel workbooks, use pd.read_excel instead of pd.read_csv. At the end, all months of data are also exported into one combined CSV file.

  • Hope it solves the issue, but please add an explanation of your code so users get a proper understanding of what it really does. – Jaimil Patel May 25 '20 at 16:14