I am trying to speed up the solving of a nonlinear least-squares problem in Python. I can compute both the function value and the Jacobian in one forward pass, (val, jac) = fun(x). A solver like scipy.optimize.least_squares only accepts two separate functions, fun and jac, which for my problem means that the function value has to be computed twice per iteration (once in fun and once in jac).
Is there a trick for avoiding solving the primal problem twice? The more general function scipy.optimize.minimize supports the above style with the jac=True keyword, but it is slow for my problem.
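For concreteness, here is a minimal sketch of the setup (the residual and Jacobian below are toy stand-ins for my real model). Splitting fun into two wrappers means every forward pass runs twice:

import numpy as np
from scipy.optimize import least_squares

def fun(x):
    '''One forward pass yields both the residual vector and its Jacobian.'''
    residual = np.array([x[0]**2 - 1.0])
    jacobian = np.array([[2.0 * x[0]]])
    return residual, jacobian

# The two-callable interface repeats the shared forward pass:
res = least_squares(lambda x: fun(x)[0], np.array([2.0]),
                    jac=lambda x: fun(x)[1])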
CodePudding user response:
From the documentation of scipy.optimize.minimize:
If jac is a Boolean and is True, fun is assumed to return a tuple (f, g) containing the objective function and the gradient.
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html?highlight=minimize
So you can simply do it like this:
from scipy.optimize import minimize

def function(x):
    '''Function that returns both fun and jac'''
    return x**2 - 5 * x + 3, 2 * x - 5

print(minimize(function, 0, jac=True))
Edit: having reread your question, it seems this option also works for least_squares, although it is undocumented.
This works as well:
from scipy.optimize import least_squares

def function(x):
    '''Function that returns both fun and jac'''
    return x**2 - 5 * x + 3, 2 * x - 5

print(least_squares(function, 0, jac=True))
CodePudding user response:
You can do a bit of a hack: evaluate fun once per point, cache the half the solver did not ask for, and pass least_squares two thin wrappers that share the caches:
import numpy as np

val_cache = {}
jac_cache = {}

def _key(x):
    # NumPy arrays are not hashable, so key the caches on the raw bytes.
    return np.asarray(x).tobytes()

def val_fun(x):
    try:
        # Reuse the value computed during the last Jacobian call, if any.
        return val_cache.pop(_key(x))
    except KeyError:
        val, jac = fun(x)
        jac_cache[_key(x)] = jac
        return val

def jac_fun(x):
    try:
        # Reuse the Jacobian computed during the last value call, if any.
        return jac_cache.pop(_key(x))
    except KeyError:
        val, jac = fun(x)
        val_cache[_key(x)] = val
        return jac
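Wired into least_squares, the shared forward pass then runs only once per point. A minimal usage sketch continuing from the snippet above (x0 is a placeholder starting guess, and fun is your combined function):

from scipy.optimize import least_squares

x0 = np.array([2.0])  # placeholder starting point
res = least_squares(val_fun, x0, jac=jac_fun)
print(res.x)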