I need to measure the total time do_something() takes across a pair of lists containing 30,000 elements each. Below is my code:
import datetime

def run(a, b, data):
    p = datetime.datetime.now()
    val = do_something(a, b, data[0], data[1])
    q = datetime.datetime.now()
    # total_seconds() gives the full elapsed time; .microseconds alone
    # would return only the sub-second component of the timedelta
    res = (q - p).total_seconds() * 1e6
    return res
Next, I call this using the following code:
import functools
import multiprocessing

import numpy as np

func = functools.partial(run, a, b)
x = np.linspace(500, 1000, 30000).tolist()
y = np.linspace(20, 500, 30000).tolist()
data = list(zip(x, y))  # materialize: zip() is a lazy iterator in Python 3
with multiprocessing.Pool(processes=multiprocessing.cpu_count()) as pool:
    d = pool.map(func, data)
res = sum(d)
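For reference, here is a minimal self-contained version of the pattern I'm using, runnable as-is; do_something here is a trivial stub standing in for my real (much heavier) function, and the input lists are shortened:

```python
import datetime
import functools
import multiprocessing


def do_something(a, b, x, y):
    # hypothetical stub; the real workload is too large to post
    return a * x + b * y


def run(a, b, data):
    p = datetime.datetime.now()
    do_something(a, b, data[0], data[1])
    q = datetime.datetime.now()
    return (q - p).total_seconds() * 1e6  # elapsed microseconds


if __name__ == "__main__":
    func = functools.partial(run, 1.0, 2.0)
    pairs = list(zip([500.0, 750.0, 1000.0], [20.0, 260.0, 500.0]))
    with multiprocessing.Pool(processes=2) as pool:
        durations = pool.map(func, pairs)
    print(sum(durations))
```

The error appears only with the full 30k-element inputs, not with a small list like this.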
Whenever I run this, I keep getting OSError: [Errno 24] Too many open files. How do I fix this?