I'm fitting experimental data by minimizing a residual (sum of squared differences) function. Everything is quite trivial, but this time I can't figure out what's wrong and why the result of the fit is so weird. The example below is simplified compared with the original problem, but it still returns wrong parameters even when I set the true parameter values as the initial guess.
```
import matplotlib.pyplot as plt
import numpy as np
import csv
from scipy.optimize import curve_fit, minimize

x = np.arange(0, 10, 0.5)
a = 0.5
b = 3
ini_pars = [a, b]

def func(x, a, b):
    return a*x + b
plt.plot(x, func(x, a, b))
plt.show()

def fit(pars):
    A, B = pars
    # sum of squared residuals between the reference curve (a, b) and the trial parameters
    res = (func(x, a, b) - func(x, *pars))**2
    s = sum(res)
    return s
bnds = [(0.1, 0.5), (1, 5)]
x0 = [0.1, 4]
opt = minimize(fit, x0, bounds=bnds)
new_pars = [opt.x[0], opt.x[0]]

example = fit(ini_pars)
print(example)
example = fit(new_pars)
print(example)
print(new_pars)

plt.plot(x, func(x, *ini_pars))
plt.plot(x, func(x, *new_pars))
plt.show()
```
![enter image description here][1]

  [1]: https://i.stack.imgur.com/qc1Nu.png
CodePudding user response:
It should be `new_pars = [opt.x[0], opt.x[1]]` instead of `new_pars = [opt.x[0], opt.x[0]]`. Note also that you can extract both fitted values directly with `new_pars = opt.x`.
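For reference, here is a minimal sketch of the corrected fit, assuming the linear model `a*x + b` and the bounds/initial guess from the question; with the indexing fixed (or simply `new_pars = opt.x`), the optimizer should recover values close to the true `a = 0.5`, `b = 3`:

```
import numpy as np
from scipy.optimize import minimize

x = np.arange(0, 10, 0.5)
a, b = 0.5, 3                      # "true" parameters that generate the reference curve

def func(x, a, b):
    return a*x + b

def fit(pars):
    # sum of squared residuals between the reference curve and the trial parameters
    return np.sum((func(x, a, b) - func(x, *pars))**2)

opt = minimize(fit, x0=[0.1, 4], bounds=[(0.1, 0.5), (1, 5)])

new_pars = opt.x                   # both fitted values, same as [opt.x[0], opt.x[1]]
print(new_pars)                    # should be close to [0.5, 3]
print(fit(new_pars))               # residual should be close to 0
```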