I'm using GLPK through Julia, and I need to optimize the same GLPK.Prob repeatedly. What changes between optimizations is that some combination of variables gets fixed to 0.
Simply put, in pseudocode:
lp = GLPK.Prob()
variables_to_block = [[1,2,3], [2,3], [6,7], [19,100,111], ...]
for i in variables_to_block
    block_vars(lp, i)
    simplex(lp)
    restore_vars(lp, i)
end
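For context, block_vars and restore_vars are placeholders. One way to implement them (just a sketch, assuming the low-level GLPK.jl wrapper API with GLPK.set_col_bnds / GLPK.get_col_lb / GLPK.get_col_ub, and assuming every blocked variable originally has finite bounds lb < ub) is to fix each column's bounds at 0 and remember the old bounds:

```julia
using GLPK

# Sketch: fix each listed column to 0, saving its original bounds
# so the change can be undone after the solve.
function block_vars(lp, cols)
    saved = Dict{Int,Tuple{Float64,Float64}}()
    for j in cols
        saved[j] = (GLPK.get_col_lb(lp, j), GLPK.get_col_ub(lp, j))
        GLPK.set_col_bnds(lp, j, GLPK.FX, 0.0, 0.0)  # fix x_j = 0
    end
    return saved
end

# Sketch: restore the saved bounds; assumes finite lb < ub,
# otherwise the bound type would have to be LO/UP/FR instead of DB.
function restore_vars(lp, saved)
    for (j, (lb, ub)) in saved
        GLPK.set_col_bnds(lp, j, GLPK.DB, lb, ub)
    end
end
```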
When I run this, CPU 1 appears to act as a scheduler, staying in the 9-11% range, while the load on CPU 3 and CPU 4 alternates between 0% and 100% (never both at once), and CPU 2 stays at 0%.
This can take a fair amount of time, and I'd like to use all cores.
However, using Julia's parallel features here is a bit of a hassle, particularly for LP models: they involve pointers, so (as far as I know) they cannot easily be copied between cores.
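To illustrate the hassle: since the GLPK.Prob itself can't be sent to workers, the only approach I can see is to rebuild the model from scratch on each worker and parallelize over the blocking patterns. A rough sketch (build_lp and block_vars are hypothetical helpers I would have to write; the objective extraction also assumes the low-level GLPK.jl API):

```julia
using Distributed
addprocs(3)  # one worker per spare core

@everywhere using GLPK

@everywhere function solve_with_blocked(vars)
    lp = build_lp()        # hypothetical: each worker reconstructs the model
    block_vars(lp, vars)   # hypothetical: fix the given variables to 0
    GLPK.simplex(lp)
    return GLPK.get_obj_val(lp)
end

# each blocking pattern is solved independently on some worker
results = pmap(solve_with_blocked, variables_to_block)
```

Rebuilding the whole model for every solve throws away the warm start I get from solving the same GLPK.Prob repeatedly, which is why I'd prefer a solver-side solution.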
Is there a way to make the GLPK solver automatically use all cores, either by compiling GLPK in a particular way, or by any other means?