I am trying to read a list of N .csv files in parallel (the code below forks two processes).
Right now I do the following:
import multiprocessing
- Create an empty list
- Append the .csv filenames from the directory to the list
- def A() -- reads the even-indexed files (list[::2])
- def B() -- reads the odd-indexed files (list[1::2])
- Process 1 runs A()
- Process 2 runs B()
```python
import csv
import glob
from multiprocessing import Process

file_list = []

def read_all_lead_files(folder):
    # Collect every .csv path in the folder.
    for files in glob.glob(folder + "*.csv"):
        file_list.append(files)

def read_even():
    # Actually read the even-indexed files (the bare slice did nothing).
    for path in file_list[::2]:
        with open(path, newline="") as f:
            rows = list(csv.reader(f))

def read_odd():
    # Actually read the odd-indexed files.
    for path in file_list[1::2]:
        with open(path, newline="") as f:
            rows = list(csv.reader(f))

if __name__ == "__main__":
    read_all_lead_files("folder/")  # placeholder path
    p1 = Process(target=read_even)
    p1.start()
    p2 = Process(target=read_odd)
    p2.start()
    p1.join()
    p2.join()
```
Is there a faster way to partition the list of files across the processes?
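For example, I'm wondering whether `multiprocessing.Pool` would be a cleaner way to do the split, since it partitions the iterable across workers for me. A rough sketch of what I mean (`read_one` here is a hypothetical helper, not part of my current code):

```python
import csv
import glob
from multiprocessing import Pool

def read_one(path):
    # Hypothetical helper: parse a single .csv into a list of rows.
    with open(path, newline="") as f:
        return list(csv.reader(f))

if __name__ == "__main__":
    file_list = glob.glob("folder/" + "*.csv")  # placeholder folder
    with Pool() as pool:
        # Pool.map splits file_list across the worker processes
        # automatically, so no manual even/odd partitioning is needed.
        results = pool.map(read_one, file_list)
```

Would Pool's automatic chunking generally beat the manual even/odd split here?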