
I am trying to solve a capacitated facility location problem in R. Here is some sample data:

n <- 500 # number of customers
m <- 20  # number of facility centers

set.seed(1234)

fixedcost <- round(runif(m, min=5000, max=10000))

warehouse_locations <- data.frame(
  id=c(1:m),
  y=runif(m, 22.4, 22.6),
  x= runif(m, 88.3, 88.48)
)

customer_locations <- data.frame(
  id=c(1:n),
  y=runif(n, 22.27, 22.99),
  x= runif(n, 88.12, 88.95)
)

capacity <- round(runif(m, 1000, 4000))
demand <- round(runif(n, 5, 50))

The model, with the transport cost function:

library(geosphere)

transportcost <- function(i, j) {
  customer <- customer_locations[i, ]
  warehouse <- warehouse_locations[j, ]
  # Haversine distance (distm expects lon/lat order) in km, times 20 cost units per km
  (distm(c(customer$x, customer$y), c(warehouse$x, warehouse$y), fun = distHaversine) / 1000) * 20
}
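As an aside on speed: this per-pair function is called n * m times while ompr expands the objective. A common speed-up (not in the original post; a sketch that repeats the sample data so it stands alone) is to precompute the whole cost matrix in one vectorized `distm()` call and make `transportcost` a plain matrix lookup:

```r
library(geosphere)

# Sample data as above, repeated so the snippet stands alone
set.seed(1234)
n <- 500
m <- 20
warehouse_locations <- data.frame(id = 1:m,
                                  y = runif(m, 22.4, 22.6),
                                  x = runif(m, 88.3, 88.48))
customer_locations <- data.frame(id = 1:n,
                                 y = runif(n, 22.27, 22.99),
                                 x = runif(n, 88.12, 88.95))

# distm() accepts matrices of (lon, lat) points and returns the full
# n x m distance matrix in a single call, so model construction only
# does matrix lookups instead of n * m separate Haversine calls.
cost <- distm(as.matrix(customer_locations[, c("x", "y")]),
              as.matrix(warehouse_locations[, c("x", "y")]),
              fun = distHaversine) / 1000 * 20

transportcost <- function(i, j) cost[i, j]
```

This does not change the model, only how fast it is built.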


library(ompr)
library(magrittr)
model <- MIPModel() %>%
  # 1 iff i gets assigned to SC j
  add_variable(x[i, j], i = 1:n, j = 1:m, type = "binary") %>%
  
  # 1 if SC j is built
  add_variable(y[j], j = 1:m, type = "binary") %>%
  
  # Objective function
  set_objective(sum_expr(transportcost(i, j) * x[i, j], i = 1:n, j = 1:m) + 
                  sum_expr(fixedcost[j] * y[j], j = 1:m), "min") %>%
  
  #Demand of customers shouldn't exceed total facility capacities
  add_constraint(sum_expr(demand[i] * x[i, j], i = 1:n) <= capacity[j] * y[j], j = 1:m) %>%
  
  # every customer needs to be assigned to a SC
  add_constraint(sum_expr(x[i, j], j = 1:m) == 1, i = 1:n) %>% 
  
  # if a customer is assigned to a SC, then this SC must be built
  add_constraint(x[i,j] <= y[j], i = 1:n, j = 1:m)
model



library(ompr.roi)
library(ROI.plugin.glpk)
result <- solve_model(model, with_ROI(solver = "glpk", verbose = TRUE))

At this moment, the solver is still running and has not returned a result.

Is there any way I can reduce the computation time? If I understand it correctly, the current incumbent is within about 0.4% of the optimal objective. I would be happy even if the gap were far larger than that, as long as I obtain a usable model. Is there any way I can set that? A 5-6% gap would be good enough.

Shibaprasadb
    You may want to try Symphony. See discussion here: https://github.com/dirkschumacher/ompr/issues/290 – Erwin Kalvelagen Jun 16 '21 at 12:35
  • This did the job. Was very fast. Thanks. (P.S: Big fan/follower of your blog) – Shibaprasadb Jun 17 '21 at 05:16
  • It has been very slow when I try the Symphony in my real-time data which has like 7500 customer points and 8 Service centers. I set the gap to 3%, it ran for 1.5 hours and didn't yield anything. – Shibaprasadb Jun 17 '21 at 08:14

3 Answers

2

Following @Erwin Kalvelagen's comment, I used the SYMPHONY solver and edited one line:

library(ROI.plugin.symphony)
result <- solve_model(model, with_ROI(solver = "symphony",
                                      verbosity=-1, gap_limit=1.5))

Processing time was reduced a lot and I got the answer!

Shibaprasadb
  • Can anyone explain why Symphony is a better solver? Or point to a resource/reference for what solvers available within ROI are faster vs slower? I can't get symphony to run in my context... – Bajcz Jan 06 '22 at 14:27
  • @Bajcz what is happening with Symphony in your case? Also, I don't think one is better than the other; the final solution (if feasible) is the same for both. For this problem I used SYMPHONY because I could set the `gap_limit`. Even today, I switched from `symphony` to `glpk` so that I could set a time limit. – Shibaprasadb Jan 06 '22 at 15:47
  • for some reason I can't get Symphony's libraries to install on my U's supercomputing cluster. But I know there are something like 17 solvers in ROI so I figured there had to be some advantages to some over others. – Bajcz Jan 06 '22 at 16:22
1

You can try the three approaches below:

  1. You can try reformulating the last constraint:

if a customer is assigned to a SC, then this SC must be built

Instead of the n * m individual linking constraints, you can use the aggregated form add_constraint(sum_expr(x[i, j], i = 1:n) <= n * y[j], j = 1:m). Note the factor n: the sum over all customers can be larger than 1, so <= y[j] alone would cut off feasible assignments.

This reduces the number of constraints from n * m to m, which should speed up model building, although the LP relaxation becomes weaker. In this particular model, the capacity constraint already forces x[i, j] = 0 whenever y[j] = 0, so the linking constraints could even be dropped entirely.

  2. Apart from that, you can add termination criteria based on the minimum optimality gap you are willing to accept and/or the maximum run time you want the model to run for.

  3. You can also try another solver instead of glpk and see if it helps.
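A sketch combining points 2 and 3 above, assuming the SYMPHONY backend (the `gap_limit` and `time_limit` parameter names are taken from the Rsymphony interface, not from the original answer):

```r
library(ompr.roi)
library(ROI.plugin.symphony)

# Sketch: stop at a 5% optimality gap or after 10 minutes,
# whichever is reached first; `model` is the ompr model built above.
result <- solve_model(model, with_ROI(
  solver = "symphony",
  verbosity = -1,
  gap_limit = 5,     # percent
  time_limit = 600   # seconds
))
```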

-1

R and Python modelling libraries are very slow for MIP; try the open-source solver lp_solve.

  • 1
    Can you provide additional guidance as to how someone, specifically an R user, would go about doing this? Some of us need to work in R and/or may be unfamiliar with open source options beyond it... – Bajcz Dec 06 '21 at 18:03