
I'm trying to fit an exponential function using scipy.optimize.curve_fit() (the example data and code are below), but it always raises a RuntimeError like this: `RuntimeError: Optimal parameters not found: Number of calls to function has reached maxfev = 5000.` I'm not sure where I'm going wrong.

import numpy as np
from scipy.optimize import curve_fit

x = np.arange(-1, 1, .01)
param1 = [-1, 2, 10, 100]
fit_func = lambda x, a, b, c, d: a * np.exp(b * x + c) + d
y = fit_func(x, *param1)
popt, _ = curve_fit(fit_func, x, y, maxfev=5000)  # raises the RuntimeError above
keeptg
  • Thanks for your reply. But since `len(np.arange(-1, 1, .01)) = 200`, `len(y)` should also equal 200, so there should be 200 points in this example. – keeptg Aug 22 '20 at 20:57
  • It could be that curve_fit has not been given enough time to find the solution. curve_fit doesn't do anything sophisticated; it simply searches based on inputs and outputs, no derivatives or anything. If you use a function that specifically takes the derivative into consideration, it will find the parameters much faster. In fact, there are probably directly known functions for the exponential. – Bobby Ocean Aug 23 '20 at 00:42

1 Answer


This is almost certainly due to the initial guess for the parameters.

You don't pass an initial guess to curve_fit, which means it defaults to a value of 1 for every parameter. Unfortunately, this is a terrible guess in your case. The function of interest is an exponential, one property of which is that the derivative is also an exponential. So all derivatives (first-order, second-order, etc.) will not just be wrong, they will have the wrong sign. This means the optimizer will have a very difficult time making progress.
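
To see just how far off that default is, you can evaluate the model at the all-ones guess and compare it with the data (this is just an illustrative check):

import numpy as np

fit_func = lambda x, a, b, c, d: a * np.exp(b * x + c) + d

x = np.arange(-1, 1, .01)
y = fit_func(x, -1, 2, 10, 100)     # the data: roughly -1.6e5 to -2.9e3
y_guess = fit_func(x, 1, 1, 1, 1)   # model at the default guess: roughly 2 to 8

print(y.min(), y.max())
print(y_guess.min(), y_guess.max())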

You can solve this by giving the optimizer just a smidge of help. Since you know all your data is negative, you can pass -1 as an initial guess for the first parameter (the scale or amplitude of the function). This alone is enough for the optimizer to arrive at a reasonable fit.

import matplotlib.pyplot as plt

p0 = (-1, 1, 1, 1)
popt, _ = curve_fit(fit_func, x, y, p0=p0, maxfev=5000)

fig, ax = plt.subplots()
ax.plot(x, y, label="Data", color="k")
ax.plot(x, fit_func(x, *popt), color="r", linewidth=3.0, linestyle=":", label="Fitted")
ax.legend()
fig.tight_layout()
plt.show()

You should see something like this: a plot of the data with the fitted curve overlaid.
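
Note that the recovered `popt` won't necessarily match `param1`: since `a * np.exp(b * x + c)` is the same as `(a * np.exp(c)) * np.exp(b * x)`, the parameters `a` and `c` trade off against each other, so many different parameter combinations produce the same curve. A quick sanity check is to compare the fitted curve against the data directly:

print(popt)                                    # may differ from param1 = [-1, 2, 10, 100]
print(np.max(np.abs(fit_func(x, *popt) - y)))  # should be small if the fit succeeded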

bnaecker
  • Thanks for your kind reply! Following your recommendation, I fitted the example data successfully. But for my real data the result is still unreasonable. I tried setting different `p0` values, but there is no obvious improvement. I'd appreciate any further suggestions. The example data is as follows: `x = [3847.0, 3947.0, 4247.0, 4347.0, 4047.0, 4147.0]`, `y = [-5.025146484375, -6.69677734375, -1.90966796875, 1.6513671875, -3.9755859375, -1.6123046875]` – keeptg Aug 23 '20 at 09:01
  • Following [Warren Weckesser's answer](https://stackoverflow.com/questions/21420792/exponential-curve-fitting-in-scipy/21421137), I tried again. The unreasonable result was still caused by the initial parameters. For the second example, `fit_func` needs a very small `b`, so I set `p0 = (-1, -1e-5, 1, 1)` and got what I wanted. – keeptg Aug 23 '20 at 15:50
  • @keeptg Exponentials are unfortunately difficult to fit sometimes. There are many combinations of parameters which produce similarly shaped curves, which means many local minima for an optimizer to get stuck in. It looks like you need a small `b` because the `x` values are quite large; you could shift them closer to zero, fit the data, and then shift back, which might offer some improvement. But in any case, you seem to have found initial parameters which work :) – bnaecker Aug 23 '20 at 16:12
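
For reference, a minimal sketch that puts the last two comments together, fitting the data posted above with the small-`b` starting point reported to work (both the data and `p0` are taken from the comments; nothing else in the thread confirms the exact output):

import numpy as np
from scipy.optimize import curve_fit

fit_func = lambda x, a, b, c, d: a * np.exp(b * x + c) + d

# Data from the comments above.
x = np.array([3847.0, 3947.0, 4247.0, 4347.0, 4047.0, 4147.0])
y = np.array([-5.025146484375, -6.69677734375, -1.90966796875,
              1.6513671875, -3.9755859375, -1.6123046875])

# The x values are large, so b must start near zero for exp(b * x + c)
# to stay finite at the initial guess.
p0 = (-1, -1e-5, 1, 1)
popt, _ = curve_fit(fit_func, x, y, p0=p0, maxfev=5000)
print(popt)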