I have a problem of the form:
min (x1 - k1) + (x2 - k2) + ...
where the k_i are constants,
with linear constraints:
A*x = B
B's size is 3x1, A is 3xn, and x is nx1.
I also have bound constraints of the form
l1 <= x1 <= u1
l2 <= x2 <= u2 ... etc.
I'm trying to solve it in Python with SciPy:
import numpy as np
from scipy.optimize import Bounds, minimize, fmin_cobyla

A = np.array([[0.11,   0.1333, 0.1333, 0.01],
              [0.02,   6.667,  0.1333, 0.12],
              [0.0933, 0.6667, 0.6,    0.01]])
B = np.array([[25],
              [57],
              [28]])
l = np.array([[50],
              [20],
              [5],
              [50]])
u = np.array([[200],
              [80],
              [20],
              [200]])

def objfun(x):
    return np.linalg.norm(x - (u - l) / 2)

x0 = np.array([[1],
               [2],
               [3],
               [4]])

bounds = Bounds(l, u)
eq_cons = {'type': 'eq', 'fun': lambda x: np.matmul(A, x) - B}
res = minimize(objfun, x0, method='SLSQP',
               constraints=[eq_cons], options={'ftol': 1e-9, 'disp': True},
               bounds=bounds)
but it gives me the following error:
all the input arrays must have same number of dimensions, but the array at index 0 has 2 dimension(s) and the array at index 1 has 1 dimension(s)
Thanks for the help.
CodePudding user response:
Your "vectors" must be dimension 1; yours are dimension 2.
import numpy as np
from scipy.optimize import Bounds, minimize

A = np.array([[0.11,   0.1333, 0.1333, 0.01],
              [0.02,   6.667,  0.1333, 0.12],
              [0.0933, 0.6667, 0.6,    0.01]])
B = np.array([25, 57, 28])
l = np.array([50, 20, 5, 50])
u = np.array([200, 80, 20, 200])

def objfun(x):
    return np.linalg.norm(x - (u - l) / 2)

x0 = np.array([1, 2, 3, 4])

bounds = Bounds(l, u)
eq_cons = {'type': 'eq', 'fun': lambda x: np.matmul(A, x) - B}
res = minimize(objfun, x0, method='SLSQP',
               constraints=[eq_cons], options={'ftol': 1e-9, 'disp': True},
               bounds=bounds)
print(res)
gives
Positive directional derivative for linesearch (Exit mode 8)
Current function value: 36.82729965663988
Iterations: 34
Function evaluations: 298
Gradient evaluations: 30
fun: 36.82729965663988
jac: array([-0.67884445, -0.27153778, -0.06788445, -0.67884445])
message: 'Positive directional derivative for linesearch'
nfev: 298
nit: 34
njev: 30
status: 8
success: False
x: array([50., 20., 5., 50.])
This is not my field, but take note of Reinderien's comment: scipy.optimize does have milp and linprog specifically for linear programming.
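If you would rather keep the 2-D column definitions from the question, a minimal sketch (my addition, not part of the original answer): flatten the arrays with ravel() before handing them to SciPy, since Bounds and minimize expect 1-D arrays.
import numpy as np
from scipy.optimize import Bounds

# 2-D column vectors exactly as defined in the question
B = np.array([[25], [57], [28]])
l = np.array([[50], [20], [5], [50]])
u = np.array([[200], [80], [20], [200]])

# ravel() turns each (n, 1) array into a flat (n,) array
B, l, u = B.ravel(), l.ravel(), u.ravel()
bounds = Bounds(l, u)  # l.shape == u.shape == (4,) now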
CodePudding user response:
As already mentioned in the comments and in Rory's answer, you should use scipy.optimize.linprog for linear problems (LPs), i.e. problems whose objective function and constraint functions are all linear. Only use scipy.optimize.minimize for nonlinear problems (NLPs). The former interfaces sophisticated high-performance LP solvers (like HiGHS) and will therefore perform much better in most cases.
That being said, it's also important to stress that minimizing the linear objective function
c^T * (x - k) = 1*(x1 - k1) + 1*(x2 - k2) + ...
is not equivalent to minimizing the Euclidean norm that your objfun actually computes,
||x - k|| = sqrt( (x1 - k1)**2 + (x2 - k2)**2 + ... )
as it makes your objective function nonlinear and changes the fundamental properties of your optimization problem. Spoiler alert: your LP is infeasible, i.e. it doesn't have an optimal solution whilst your nonlinear problem (NLP) has one.
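The infeasibility is easy to see by hand: every coefficient in A is positive, so the left-hand side of the second equality is smallest when all variables sit at their lower bounds, and even there it is about 141, far above 57. A quick programmatic check (a sketch I'm adding, not part of the original answer) solves the feasibility problem with a zero objective; linprog reports status 2 when no feasible point exists:
import numpy as np
from scipy.optimize import linprog

A = np.array([[0.11,   0.1333, 0.1333, 0.01],
              [0.02,   6.667,  0.1333, 0.12],
              [0.0933, 0.6667, 0.6,    0.01]])
B = np.array([25, 57, 28])
bounds = [(50, 200), (20, 80), (5, 20), (50, 200)]

# Zero objective: we only ask whether A*x = B is attainable within the bounds
res = linprog(np.zeros(4), A_eq=A, b_eq=B, bounds=bounds)
print(res.status)  # 2 means "problem appears to be infeasible"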
Here's a code snippet that solves your LP via linprog:
import numpy as np
from scipy.optimize import linprog

# Linear constraint A*x = B
A = np.array([[0.11,   0.1333, 0.1333, 0.01],
              [0.02,   6.667,  0.1333, 0.12],
              [0.0933, 0.6667, 0.6,    0.01]])
B = np.array([25, 57, 28])

# Bounds l <= x <= u
l = np.array([50, 20, 5, 50])
u = np.array([200, 80, 20, 200])
bounds = [(ll, uu) for ll, uu in zip(l, u)]

# Objective f(x) = c^T * x - c^T * k (vector notation)
c = np.ones(4)
k = (u - l) / 2

# We minimize c^T * x  s.t.  A*x = B, l <= x <= u.
# To obtain the objective value of the original LP, we simply
# subtract the constant c^T * k from the objective value res.fun.
res = linprog(c, A_eq=A, b_eq=B, bounds=bounds)
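A hedged completion of the snippet above (the status handling is my addition, not part of the original answer): inspect res.status and, in the feasible case, shift the constant term back to recover the original objective value.
print(res.status)  # 2 for this data: the LP is infeasible, matching the spoiler above
if res.status == 0:
    # res.fun is c^T * x; subtract the constant c^T * k for the original objective
    print("original objective value:", res.fun - c @ k)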