
I have a list of LPs which I want to solve in parallel. So far I have tried both multiprocessing and joblib. But both use only 1 CPU (out of 8).

My code

import numpy as np
from joblib import Parallel, delayed
from multiprocessing import Pool, cpu_count
from scipy.optimize import linprog

def is_in_convex_hull(arg):
    # Feasibility LP: zero objective, equality constraints A x = v
    # (linprog's default bounds keep x non-negative).
    A, v = arg
    res = linprog(np.zeros(A.shape[1]), A_eq=A, b_eq=v)
    return res['success']

def convex_hull_LP(A):
    # One LP per column: drop column i and test whether it can be written
    # in terms of the remaining columns.
    pool = Pool(processes=cpu_count())
    res = pool.map(is_in_convex_hull,
                   [(np.delete(A, i, axis=1), A[:, i]) for i in range(A.shape[1])])
    pool.close()
    pool.join()
    return [i for i in range(A.shape[1]) if not res[i]]

Now in IPython I run

A = np.random.randint(0,60,size = (40,300))
%time l1 = convex_hull_LP(A)
%time l2 = Parallel(n_jobs=8)(delayed(is_in_convex_hull)((np.delete(A,i,axis=1),A[:,i])) for i in range(A.shape[1]))

Both take about 7 seconds, but only a single CPU is used, even though 8 different process IDs are shown.
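One known cause of exactly this symptom on Linux (an assumption here, not confirmed by the question) is that importing numpy/scipy against certain BLAS builds shrinks the process's CPU affinity mask to a single core, and forked pool workers inherit that mask. A minimal diagnostic sketch, assuming Linux (`os.sched_getaffinity` is not available on every platform):

import os
from multiprocessing import Pool

def worker_affinity(_):
    # Affinity mask of the current worker process (set of usable CPU ids).
    return os.sched_getaffinity(0)

if __name__ == '__main__':
    print("parent affinity:", os.sched_getaffinity(0))
    with Pool(processes=4) as pool:
        print("worker affinities:", pool.map(worker_affinity, range(4)))

If both the parent and the workers report a mask containing a single CPU id, the scheduler simply has nowhere else to place the processes, which would explain 8 PIDs sharing one core.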

Other Threads

My Questions

  • How can I split the jobs over all available CPUs? (A possible cause and workaround are sketched after this list.)
  • Or is it even possible to run this on the GPU?
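If the affinity mask does turn out to be restricted, one possible workaround is to widen it again in every worker. This is only a sketch under the assumptions above (Linux, BLAS-induced affinity as the cause); `reset_affinity` is a helper name introduced here, not something from the question:

import os
from multiprocessing import Pool, cpu_count

def reset_affinity():
    # Hypothetical worker initializer: widen the inherited CPU affinity mask
    # so this worker may be scheduled on any core (Linux-only call).
    os.sched_setaffinity(0, range(cpu_count()))

# e.g. in convex_hull_LP:
# pool = Pool(processes=cpu_count(), initializer=reset_affinity)

If the diagnostic above already reports a full mask, the bottleneck lies elsewhere and this change should not make a difference.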
  • The OS *distributes* the processes to the various CPUs. How can you tell that only one CPU is being used? – wwii Jan 03 '18 at 20:44
  • @wwii That is what `htop` shows. – Hennich Jan 04 '18 at 11:05
  • If you used the IPM-based solver with a good BLAS setup, the solving would probably be parallelized at its core (which is usually better than parallelizing at the outer layer, as you are trying to do). It's also hard to tell what the differences between those LP instances are, and whether there is something more clever to exploit (usually there is; solving many LPs from scratch isn't clever in many cases). So maybe you want to add more specifics. – sascha Jan 05 '18 at 14:00

0 Answers