I have a list of LPs which I want to solve in parallel. So far I have tried both `multiprocessing` and `joblib`, but both use only 1 CPU (out of 8).
My code:

```python
import numpy as np
from scipy.optimize import linprog
from multiprocessing import Pool, cpu_count
from joblib import Parallel, delayed

def is_in_convex_hull(arg):
    A, v = arg
    res = linprog(np.zeros(A.shape[1]), A_eq=A, b_eq=v)
    return res.success

def convex_hull_LP(A):
    pool = Pool(processes=cpu_count())
    res = pool.map(is_in_convex_hull,
                   [(np.delete(A, i, axis=1), A[:, i]) for i in range(A.shape[1])])
    pool.close()
    pool.join()
    return [i for i in range(A.shape[1]) if not res[i]]
```
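As a sanity check of the helper itself (independent of the parallelism problem), here is a minimal example using the columns of the identity matrix; note that as written the LP tests membership in the *conic* hull (nonnegative combinations), since there is no sum-to-one constraint:

```python
import numpy as np
from scipy.optimize import linprog

def is_in_convex_hull(arg):
    A, v = arg
    # Feasibility LP: does x >= 0 with A @ x = v exist? (zero objective)
    res = linprog(np.zeros(A.shape[1]), A_eq=A, b_eq=v)
    return res.success

# [0.5, 0.5] is a nonnegative combination of the identity's columns
inside = is_in_convex_hull((np.eye(2), np.array([0.5, 0.5])))
# [-1, 0] cannot be reached with x >= 0
outside = is_in_convex_hull((np.eye(2), np.array([-1.0, 0.0])))
```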
Now in IPython I run

```python
A = np.random.randint(0, 60, size=(40, 300))
%time l1 = convex_hull_LP(A)
%time l2 = Parallel(n_jobs=8)(delayed(is_in_convex_hull)((np.delete(A, i, axis=1), A[:, i]))
                              for i in range(A.shape[1]))
```
Both take about 7 seconds but use only a single CPU, although 8 different process IDs are shown.
Other Threads
- With the answer from Python multiprocessing.Pool() doesn't use 100% of each CPU I got 100% usage on all cores, but I think an LP is complicated enough to be the real bottleneck.
- I couldn't make sense of Multiprocess in python uses only one process
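The answer linked above points at CPU affinity: on Linux, a NumPy linked against OpenBLAS can pin the process to a single core at import time, and forked workers inherit that mask. A minimal sketch for inspecting and widening the mask (Linux-only standard-library calls; whether affinity is actually the cause on a given machine is an assumption):

```python
import os

# Inspect which cores the current process is allowed to run on (Linux-only)
mask = os.sched_getaffinity(0)
print(f"allowed cores: {sorted(mask)}")

# If the mask has shrunk to one core, widen it back to all cores;
# workers would need this too (e.g. via Pool's initializer argument)
os.sched_setaffinity(0, range(os.cpu_count()))
```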
My Questions
- How can I split the jobs over all available CPUs?
- Or is it even possible to run this on the GPU?