I am using Python's multiprocessing module to speed up my code, but for some reason the program ends up using more cores than the number I specify. When I check core usage with htop, I see that all 60 available cores are in use at times, instead of the 10 I requested.
As a minimal example, this code already reproduces the behavior:
import numpy as np
import multiprocessing

def square(i):
    size = 10000
    return np.dot(np.random.normal(size=size),
                  np.random.normal(size=(size, size)) @ np.random.normal(size=size))

with multiprocessing.Pool(10) as pool:
    t = pool.map(square, range(100))
I am running this in a Jupyter Notebook inside a JupyterHub. Until a few hours ago everything was running smoothly (the number of cores used never went beyond 10). The only big change since then is that I updated all my packages via conda update --all and, in particular, updated scipy with conda install -c conda-forge scipy. However, I have no idea what could have caused the issue.
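One hypothesis I can test (an assumption on my part, since the conda update may have swapped the BLAS implementation NumPy links against) is that each worker's BLAS backend is spawning its own thread pool. A sketch of how to cap the per-process BLAS threads via environment variables, which must be set before NumPy is imported:

```python
import os

# Cap the thread pools of the common BLAS backends; which variable
# is honored depends on the backend (OpenBLAS, MKL, or BLIS), so
# setting all three before importing NumPy covers the usual cases.
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"

import numpy as np

# A matrix multiplication like the one in square() above; with the
# caps in place it should run on a single BLAS thread per process.
a = np.random.normal(size=(500, 500))
b = np.random.normal(size=(500, 500))
c = a @ b
print(c.shape)
```

If this keeps htop at 10 busy cores, the oversubscription comes from BLAS threading multiplied across the 10 pool workers rather than from multiprocessing itself.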