
I have the following vectors (or rather a C matrix and a V vector) and the constants K1, K2, K3:

Constraints      [c11 + c12 + c13 + ... + c1n] <= K1
                 [c21 + c22 + c23 + ... + c2n] <= K2
                 [c31 + c32 + c33 + ... + c3n] <= K3
-------------------------------------------------
Values           [ v1 +  v2 +  v3 + ... +  vn] -> Max

As input I get the values of C and V, and as output I want to provide the X vector, which contains only 0 and 1 values and gives me

          [c11 * x1 + c12 * x2 + c13 * x3 + ... + c1n * xn] <= K1
          [c21 * x1 + c22 * x2 + c23 * x3 + ... + c2n * xn] <= K2
          [c31 * x1 + c32 * x2 + c33 * x3 + ... + c3n * xn] <= K3
          ------------------------------------------------------
          [ v1 * x1 +  v2 * x2 +  v3 * x3 + ... +  vn * xn] -> Max

As an oversimplified example:

Input:

          K1 = 15
          K2 = 20
          K3 = 10

          c1 = [3, 6, 8]    | sum(c1 * X) <= 15 
          c2 = [8, 9, 3]    | sum(c2 * X) <= 20
          c3 = [7, 5, 2]    | sum(c3 * X) <= 10
           v = [2, 5, 3]    | sum( v * X) -> Max

Output providing the X vector that maximizes the value within the constraints:

           X = [0, 1, 1]

I am looking for an elegant algorithm (maybe a Java or C# implementation as well) that gives me the output based on the input. We can assume that the number of constraints is always 3 and that all values of C and V (and K1, K2, K3) are provided.
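
To make the formulation concrete, here is a brute-force enumeration that simply tries all 2^n possible X vectors (the function name best_selection is just a throwaway name for illustration). It reproduces X = [0, 1, 1] for the example above, but it obviously does not scale, which is why I am looking for a better approach:

    from itertools import product

    def best_selection(C, V, K):
        """Try every 0/1 vector X of length n and return the one that
        maximizes sum(V[i]*X[i]) while keeping, for every row j,
        sum(C[j][i]*X[i]) <= K[j]."""
        n = len(V)
        best_x, best_value = None, None
        for x in product((0, 1), repeat=n):
            feasible = all(sum(c[i] * x[i] for i in range(n)) <= k
                           for c, k in zip(C, K))
            if feasible:
                value = sum(V[i] * x[i] for i in range(n))
                if best_value is None or value > best_value:
                    best_x, best_value = list(x), value
        return best_x, best_value

    # The oversimplified example from above; prints ([0, 1, 1], 8)
    print(best_selection(C=[[3, 6, 8], [8, 9, 3], [7, 5, 2]],
                         V=[2, 5, 3],
                         K=[15, 20, 10]))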


Another simple example could be: you have a room (3D), so your constraints are the width, height and length of the room (K1, K2, K3), and you have a list of furniture items (n items). Each furniture item i has its own length (c1i), width (c2i), height (c3i) and value (vi). You want to pack the room with the most valuable furniture items that fit inside the room's dimensions. So the output is an X vector of length n which contains only 0 and 1 values: if xi = 1, the i-th item gets picked to be in the room; if xi = 0, the i-th item won't be picked to be in the room.

DDan
  • That's integer linear programming, but if X is only three elements each either 0 or 1 then it's much simpler to just try all 8 of them. – harold Jan 26 '16 at 14:47
  • That's why I mentioned my example is oversimplified. X has n elements, same as V. Basically every element i has its own c1i, c2i, c3i and vi values, and the element either gets picked (xi is 1) or not (xi is 0) – DDan Jan 26 '16 at 14:50
  • How big is the vector X? – Sorin Jan 26 '16 at 15:38
  • Like I mentioned in my earlier comment: X has n elements, same as V. Basically every element i has its own c1i, c2i, c3i and vi values, and the element either gets picked (xi is 1) or not (xi is 0) – DDan Jan 26 '16 at 15:45
  • Another simple example could be: you have a room (3D), so your constraints are the width, height and length of the room (K1, K2, K3), and you have a list of furniture items (n items). Each furniture item i has its own length (c1i), width (c2i), height (c3i) and value (vi). You want to pack the room with the most valuable furniture items that fit. – DDan Jan 26 '16 at 15:51
  • What you are trying to solve is called a [multidimensional knapsack problem](https://en.wikipedia.org/wiki/List_of_knapsack_problems), see [this post](https://stackoverflow.com/questions/1827600/multiple-constraint-knapsack-problem) – serge_k Jan 26 '16 at 15:56
  • @serge_k : and this problem is known to be NP-Hard so don't expect an optimal value on a reasonably sized problem – Seb Jan 26 '16 at 15:58
  • Yes, correct, that's exactly what I want to solve with m = 3 (number of constraints). So should I go by merging variables and running the 1-dimensional knapsack dynamic programming solution? – DDan Jan 26 '16 at 16:00
  • Let's say I multiply the c2 vector by 1,000 and the c3 vector by 100,000? Is that how it works? Isn't that making one constraint overrule all the others? – DDan Jan 26 '16 at 16:02
  • AFAICT this doesn't model packing 3D objects in a box, your constraints say that the sum of the sizes must not exceed the size of the box (in all three dimensions), but that is neither necessary nor sufficient for a 3D packing (consider that 8 unit cubes fit in a 2x2x2 box, but that violates every constraint) – harold Jan 26 '16 at 16:34

2 Answers


This is the multidimensional 0-1 knapsack problem, which is NP-hard.

An overview of solution methods can be found here, a relatively recent research paper here, and a genetic algorithm implementation in Python here.

Taken from the Python implementation (the pyeasyga link above) is this example:

from pyeasyga import pyeasyga

# setup data
data = [(821, 0.8, 118), (1144, 1, 322), (634, 0.7, 166), (701, 0.9, 195),
        (291, 0.9, 100), (1702, 0.8, 142), (1633, 0.7, 100), (1086, 0.6, 145),
        (124, 0.6, 100), (718, 0.9, 208), (976, 0.6, 100), (1438, 0.7, 312),
        (910, 1, 198), (148, 0.7, 171), (1636, 0.9, 117), (237, 0.6, 100),
        (771, 0.9, 329), (604, 0.6, 391), (1078, 0.6, 100), (640, 0.8, 120),
        (1510, 1, 188), (741, 0.6, 271), (1358, 0.9, 334), (1682, 0.7, 153),
        (993, 0.7, 130), (99, 0.7, 100), (1068, 0.8, 154), (1669, 1, 289)]

ga = pyeasyga.GeneticAlgorithm(data)        # initialise the GA with data
ga.population_size = 200                    # increase population size to 200 (default value is 50)

# define a fitness function
def fitness(individual, data):
    weight, volume, price = 0, 0, 0
    for (selected, item) in zip(individual, data):
        if selected:
            weight += item[0]
            volume += item[1]
            price += item[2]
    if weight > 12210 or volume > 12:
        price = 0
    return price

ga.fitness_function = fitness               # set the GA's fitness function
ga.run()                                    # run the GA
print ga.best_individual()                  # print the GA's best solution

The last dimension of data is price, and the other two dimensions are weight and volume.

You can adjust this example so that it solves problems with more than two dimensions.
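
For instance, for the three-constraint setup in the question, only the fitness function needs to change. A rough sketch, assuming each tuple in data is (c1i, c2i, c3i, vi) and that K1, K2 and K3 are defined elsewhere:

    # Sketch of a fitness function for three constraints; infeasible
    # selections are scored 0 so the GA discards them.
    def fitness(individual, data):
        s1 = s2 = s3 = value = 0
        for selected, (c1, c2, c3, v) in zip(individual, data):
            if selected:
                s1 += c1
                s2 += c2
                s3 += c3
                value += v
        if s1 > K1 or s2 > K2 or s3 > K3:
            value = 0
        return value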

I hope that helps.

EDIT: The genetic algorithm does not guarantee in general that it finds the optimal solution. For three constraints it will probably find good solutions, but there is no guarantee of optimality.


UPDATE: Mathematical Optimization Solution

One other option is to use PuLP, an open source modeling framework for mathematical optimization problems. This framework invokes a solver, i.e., a piece of software designed specifically to solve optimization problems. In a nutshell, the job of the framework is to link the mathematical problem description with the form it needs to have when solved, and the job of the solver is to actually solve the problem.

You can install pulp with, e.g., pip (`pip install pulp`).

Here is the previous example modeled in pulp, adapted from this example:

import pulp as plp

# Let's keep the same data
data = [(821, 0.8, 118), (1144, 1, 322), (634, 0.7, 166), (701, 0.9, 195),
        (291, 0.9, 100), (1702, 0.8, 142), (1633, 0.7, 100), (1086, 0.6, 145),
        (124, 0.6, 100), (718, 0.9, 208), (976, 0.6, 100), (1438, 0.7, 312),
        (910, 1, 198), (148, 0.7, 171), (1636, 0.9, 117), (237, 0.6, 100),
        (771, 0.9, 329), (604, 0.6, 391), (1078, 0.6, 100), (640, 0.8, 120),
        (1510, 1, 188), (741, 0.6, 271), (1358, 0.9, 334), (1682, 0.7, 153),
        (993, 0.7, 130), (99, 0.7, 100), (1068, 0.8, 154), (1669, 1, 289)]

w_cap, v_cap = 12210, 12

rng_items = xrange(len(data))

# Restructure the data in dictionaries
items = ['item_{}'.format(i) for i in rng_items]
weight = {items[i]: data[i][0] for i in rng_items}
volume = {items[i]: data[i][1] for i in rng_items}
price = {items[i]: data[i][2] for i in rng_items}

# Make the problem, declare it as a maximization problem
problem_name = "3D Knapsack"
prob = plp.LpProblem(problem_name, plp.LpMaximize)

# Define the variables
plp_vars = plp.LpVariable.dicts('', items, 0, 1, plp.LpInteger) 

# Objective function
prob += plp.lpSum([price[i]*plp_vars[i] for i in plp_vars])

# Constraints
prob += plp.lpSum([weight[i]*plp_vars[i] for i in plp_vars]) <= w_cap
prob += plp.lpSum([volume[i]*plp_vars[i] for i in plp_vars]) <= v_cap

# Solution
prob.solve()

# If you want to save the problem formulation in a file
# prob.writeLP(problem_name + '.lp')

# Each of the variables is printed with its resolved optimum value
for v in prob.variables():
    print v.name, "=", v.varValue

# The optimised objective function value is printed to the screen    
print "Total gain = ", plp.value(prob.objective)

with an objective of 3,540.

A demonstration of how this runs is here.
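
For the original notation of the question (a C matrix, a V vector and capacities K1, K2, K3), the same model can be written down almost verbatim. A minimal sketch using the small example input from the question (the variable names are my own):

    import pulp as plp

    # The oversimplified example from the question
    C = [[3, 6, 8], [8, 9, 3], [7, 5, 2]]
    V = [2, 5, 3]
    K = [15, 20, 10]

    prob = plp.LpProblem("Multidimensional_Knapsack", plp.LpMaximize)
    x = [plp.LpVariable("x_{}".format(i), cat=plp.LpBinary)
         for i in range(len(V))]

    # Objective: maximize v1*x1 + ... + vn*xn
    prob += plp.lpSum(V[i] * x[i] for i in range(len(V)))

    # One <= constraint per row of C
    for row, cap in zip(C, K):
        prob += plp.lpSum(row[i] * x[i] for i in range(len(V))) <= cap

    prob.solve()
    print([int(var.varValue) for var in x])   # expected: [0, 1, 1]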

Ioannis
  • This looks promising. Thanks! – DDan Jan 26 '16 at 16:05
  • Indeed to emphasize the last point: when I ran this Python code I get an objective of 3442. The docs report an objective of 3531. I believe the optimal solution of this problem has an objective of 3540. – Erwin Kalvelagen Jan 26 '16 at 17:54
  • @ErwinKalvelagen True. For small problems it might be a good option to initialize several random seeds, run it in a loop and keep the best result. I will post a `pulp` formulation with `cbc` if I find time. I am trying to keep my answer in the open source sphere :) – Ioannis Jan 27 '16 at 10:41

It can be solved exactly, for example using Gurobi's C# bindings:

var env = new GRBEnv();
var model = new GRBModel(env);

// One binary decision variable per item
var vars = model.AddVars(v.Length, GRB.BINARY);
model.Update();

// Objective: maximize v1*x1 + v2*x2 + ... + vn*xn
GRBLinExpr obj = 0.0;
obj.AddTerms(v, vars, 0, v.Length);
model.SetObjective(obj, GRB.MAXIMIZE);

// One <= constraint per row of C
for (int i = 0; i < 3; i++) {
    GRBLinExpr expr = 0.0;
    for (int j = 0; j < C[i].Length; j++)
        expr.AddTerm(C[i][j], vars[j]);
    model.AddConstr(expr, GRB.LESS_EQUAL, K[i], "");
}
model.Update();
model.Optimize();

Then you can call `Get(GRB.DoubleAttr.X)` on the variables to get their values.

harold
  • Thanks for the tip! How can I run this? Do you have some kind of tutorial to set up? Is it free? – DDan Jan 27 '16 at 10:20
  • @DDan Gurobi is sometimes free, for academic use, pretty expensive for commercial use though. GLPK works a bit differently, but the above code would be easy to adapt. – harold Jan 27 '16 at 10:25