I have a set of large text files. Currently, I have a generator function which parses each file sequentially and yields a value for each line:
def parse():
    for f in files:
        for line in f:
            # parse line
            yield value
It takes 24 hours to iterate over all the files! I'd like to know whether it's possible to read multiple files in parallel and still yield the results efficiently.
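For context, this is roughly what I have in mind: a sketch using multiprocessing.Pool, assuming the parsing is CPU-bound and the parsed values are picklable (parse_line, parse_file, and paths below are placeholders for my real logic, not code I actually have working):

from multiprocessing import Pool

def parse_line(line):
    # placeholder for the real per-line parsing
    return line.strip()

def parse_file(path):
    # each worker process parses one whole file and returns its values
    with open(path) as f:
        return [parse_line(line) for line in f]

def parse(paths, workers=4):
    # imap_unordered hands back each file's results as soon as any
    # worker finishes, so the generator interface is preserved
    with Pool(workers) as pool:
        for values in pool.imap_unordered(parse_file, paths):
            yield from values

One concern with this sketch is that each worker buffers a whole file's parsed values in memory before returning them, and if the bottleneck is disk I/O rather than parsing, parallel reads might not help at all. Is there a better pattern for this?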