I built a machine learning package mainly with numpy and joblib, developed under Python 3.5.x: numpy handles the computation and joblib implements the parallelism. After completing development on Python 3.5.x, I re-tested the package on Python 3.6.x and Python 3.7.x. Under Python 3.5.x it behaved as expected, using exactly the number of CPUs I set; under Python 3.6.x it used fewer CPUs, and under Python 3.7.x it used almost half of the CPUs on my machine.
- Can you add some more details, e.g. a table of your results? – mpSchrader Oct 16 '20 at 07:10
- I used Parallel from joblib to implement the multi-worker part and set the number of workers (CPUs) to ten. The package ran well on Python 3.5.x, using ten CPUs as I intended. However, when I re-tested it under Python 3.7.x, all of my CPUs were used in the computation even though the number of workers I set was still ten. I have tried setting environment variables such as MKL_NUM_THREADS, following https://stackoverflow.com/questions/30791550/limit-number-of-threads-in-numpy. That worked for Python 3.7.x, but it didn't work for Python 3.6.x and Python 3.8.x. – Tao Li Oct 16 '20 at 10:12
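The oversubscription described in the comment typically comes from two thread pools multiplying: joblib's workers times the BLAS/MKL threads that numpy spawns inside each worker. A minimal sketch of the approach the comment describes, with a hypothetical `task` function standing in for the package's real computation (note the environment variables must be set before numpy is first imported, since BLAS libraries read them once at load time):

```python
import os

# Cap the BLAS/OpenMP thread pools to one thread per worker.
# These are read when the BLAS library loads, so set them
# before importing numpy.
os.environ["MKL_NUM_THREADS"] = "1"
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["OMP_NUM_THREADS"] = "1"

import numpy as np
from joblib import Parallel, delayed


def task(seed):
    # Hypothetical stand-in for one unit of the package's computation:
    # a matrix product, which BLAS would otherwise parallelise itself.
    rng = np.random.RandomState(seed)
    a = rng.rand(100, 100)
    return float(np.linalg.norm(a @ a.T))


# n_jobs controls joblib's worker count; BLAS threading inside each
# worker is governed separately by the environment variables above.
results = Parallel(n_jobs=10)(delayed(task)(s) for s in range(10))
print(len(results))
```

With this split, total CPU usage is roughly `n_jobs × BLAS threads`, which is why capping the BLAS pools at one keeps the run at the ten CPUs the worker count asks for.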
- I think this may be related to the version of numpy. – Tao Li Oct 16 '20 at 10:14
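The numpy-version hypothesis is easy to check directly: different numpy wheels link against different BLAS backends (MKL vs OpenBLAS), and each backend honours different threading environment variables, which would explain the inconsistent behaviour across Python installations. A quick inspection using only public numpy APIs:

```python
import numpy as np

# Report the numpy version and the BLAS/LAPACK backend it was built
# against; an MKL wheel honours MKL_NUM_THREADS, an OpenBLAS wheel
# honours OPENBLAS_NUM_THREADS instead.
print(np.__version__)
np.show_config()
```

Running this in each Python environment shows whether the installations differ in numpy version or BLAS backend, which would account for MKL_NUM_THREADS working under one interpreter but not the others.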