I'm trying to speed up this script, which reads 95+ xlsx files and splits the files that contain multiple sheets into individual xlsx files. Right now, the script is crawling. Is there any way to speed it up?
import glob
from openpyxl import load_workbook, Workbook

listOfFiles = glob.glob('/path/*.xlsx')
for doc in listOfFiles:
    wb = load_workbook(filename=doc)
    for sheet in wb.worksheets[1:]:
        new_wb = Workbook()
        ws = new_wb.active
        for row_data in sheet.iter_rows():
            for row_cell in row_data:
                ws[row_cell.coordinate].value = row_cell.value
        new_wb.save('/newpath/{0}.xlsx'.format(sheet.title))
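For reference, here is one direction I've seen suggested: openpyxl's `read_only=True` mode streams rows instead of building the full cell tree in memory, and `write_only=True` workbooks support bulk `append()` of whole rows, which avoids the per-cell coordinate lookups above. This is a sketch under the assumption that only cell values matter (formatting is discarded); `split_workbooks` and its parameters are names I made up for illustration. Note it also inherits the original's behavior of overwriting output files when two workbooks share a sheet title.

```python
import glob
import os
from openpyxl import load_workbook, Workbook

def split_workbooks(src_glob, dest_dir):
    """Split every sheet after the first of each matching workbook into its own file."""
    written = []
    for doc in glob.glob(src_glob):
        # read_only=True streams rows rather than loading the whole workbook into memory
        wb = load_workbook(filename=doc, read_only=True)
        for sheet in wb.worksheets[1:]:
            # write_only=True creates an append-only workbook with a much smaller footprint
            new_wb = Workbook(write_only=True)
            ws = new_wb.create_sheet(title=sheet.title)
            for row in sheet.iter_rows(values_only=True):
                ws.append(row)  # bulk-append a whole row of values at once
            out = os.path.join(dest_dir, '{0}.xlsx'.format(sheet.title))
            new_wb.save(out)
            written.append(out)
        wb.close()  # read-only workbooks hold the file handle open until closed
    return written
```

If the bottleneck is still I/O across 95+ files, the per-file work is independent, so it could also be parallelized with `concurrent.futures.ProcessPoolExecutor`.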