I'm working on a scraper that downloads files from a website and then parses them. The parser fails when it reaches a 0-byte file (as it should). Is there a way to avoid saving 0-byte files when they are extracted?
I don't have a full code example, but what I'm doing is creating a temp folder with os.mkdir and storing the files there until they are parsed. I'm pulling them with xml.etree.ElementTree. Some pseudocode:
import os
import xlrd

# pretend the parse function is defined here
os.mkdir(r'C:\TEMPFILES_TO_PARSE')
for entry in filepath:  # filepath is my list of downloaded file paths
    wb = xlrd.open_workbook(entry)
    # begin parse function(s)
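For reference, the kind of guard I'm imagining is something like the sketch below (folder and file names are made up, and I'm not sure this is the idiomatic way):

```python
import os
import tempfile

# Hypothetical temp folder for downloaded files; tempfile.mkdtemp is
# used here instead of a hard-coded path just for illustration.
temp_dir = tempfile.mkdtemp(prefix="files_to_parse_")

def files_worth_parsing(folder):
    """Yield paths of non-empty files in folder, deleting empty ones
    so they never reach the parser."""
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        if os.path.getsize(path) == 0:
            os.remove(path)  # drop the 0-byte file
        else:
            yield path
```

So instead of looping over every downloaded file, I'd loop over `files_worth_parsing(temp_dir)` and the zero-byte ones would be filtered out (or ideally never written in the first place).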
tl;dr: I'd like to avoid saving 0-byte files so they don't trip error flags in the parser.