I have a program where I want to minimize the absolute difference of two scaled variables (an absolute error function). Say:
e_abs(x, y) = |Ax - By|, where e_abs(x, y) is the objective function I want to minimize.
The function is subject to the following constraints:
x and y are integers;
x >= 0; y >= 0
x + y = C, where C is an arbitrary constant (also C >= 0)
I am using the mip library (https://www.python-mip.com/), where I have defined both my objective function and my constraints.
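For context, the base model is set up roughly like the sketch below (simplified; A, B and C here are placeholder values, not the ones from my actual program):

```python
from mip import Model, INTEGER

A, B, C = 6.2, 1.1, 1  # placeholder values, for illustration only

m = Model()
x = m.add_var(var_type=INTEGER, lb=0)  # x integer, x >= 0
y = m.add_var(var_type=INTEGER, lb=0)  # y integer, y >= 0
m += x + y == C                        # x + y = C

# The objective should be |A*x - B*y|, but mip has no abs(),
# hence the workaround described below.
```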
The problem is that mip does not have an "abs" method, so I had to work around that by splitting the main problem into two optimization sub-problems:
e(x, y) = Ax - By
Problem 1: minimize e(x, y); subject to e(x, y) >= 0
Problem 2: maximize e(x, y); subject to e(x, y) <= 0
After solving the two sub-problems separately, I compare the two results and keep the one with the smaller |e|.
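In code, the split looks roughly like this (again with placeholder coefficients; error is an unbounded auxiliary variable that carries e(x, y) = Ax - By, matching the constraints printed further down):

```python
from mip import Model, INTEGER, CONTINUOUS, minimize, maximize, OptimizationStatus

A, B, C = 6.2, 1.1, 1  # placeholder coefficients, for illustration only

def build(sense):
    m = Model()
    x = m.add_var(var_type=INTEGER, lb=0)
    y = m.add_var(var_type=INTEGER, lb=0)
    # unbounded auxiliary variable carrying e(x, y) = A*x - B*y
    error = m.add_var(name="error", var_type=CONTINUOUS, lb=-float("inf"))
    m += A * x - B * y - error == 0   # ties error to A*x - B*y
    m += x + y == C                   # x + y = C
    if sense == "min":
        m += error >= 0
        m.objective = minimize(error)
    else:
        m += error <= 0
        m.objective = maximize(error)
    return m, x, y, error

candidates = []
for sense in ("min", "max"):
    m, x, y, error = build(sense)
    # in my run, the "max" sub-problem is the one reported INFEASIBLE (see below)
    if m.optimize() == OptimizationStatus.OPTIMAL:
        candidates.append((abs(error.x), x.x, y.x))

best_abs_error, best_x, best_y = min(candidates)  # smallest |e| of the two
```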
That should have worked, but mip does not seem to understand that the error can be negative, as I show below:
constr(0): -1.0941176470588232 X(0, 0) +6.199999999999998 X(1, 0) - error = -0.0
constr(1): error <= -0.0
constr(2): X(0, 0) + X(1, 0) = 1.0
Note: consider X(0, 0) as x and X(1, 0) as y in our example.
Again, the program returns OptimizationStatus.INFEASIBLE, even though the combination X(0, 0) = 1 and X(1, 0) = 0 clearly satisfies every constraint and therefore solves the problem.
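Checking that candidate by hand against the three printed constraints (outside of mip, just plain Python with the printed coefficients):

```python
x, y = 1, 0  # candidate: X(0, 0) = 1, X(1, 0) = 0

error = -1.0941176470588232 * x + 6.199999999999998 * y  # from constr(0)
print(error <= -0.0)  # constr(1): True (error is about -1.094)
print(x + y == 1.0)   # constr(2): True
```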
Is this a formulation issue with my model, or is it unexpected behavior of the mip library?