Modeling Data and Curve Fitting

A common use of least-squares minimization is curve fitting, where one has a parametrized model function meant to explain some phenomenon and wants to adjust the numerical values for the model so that it most closely matches some data. With scipy, such problems are typically solved with scipy.optimize.curve_fit, which is a wrapper around scipy.optimize.leastsq. Since lmfit's minimize() is also a high-level wrapper around scipy.optimize.leastsq, it can be used for curve-fitting problems. While it offers many benefits over scipy.optimize.leastsq, using minimize() for many curve-fitting problems still requires more effort than using scipy.optimize.curve_fit.

The Model class in lmfit provides a simple and flexible approach to curve-fitting problems. Like scipy.optimize.curve_fit, a Model uses a model function – a function that is meant to calculate a model for some phenomenon – and then uses that to best match an array of supplied data. Beyond that similarity, its interface is rather different from scipy.optimize.curve_fit, for example in that it uses Parameters, but also offers several other important advantages.

In addition to allowing you to turn any model function into a curve-fitting method, lmfit also provides canonical definitions for many known lineshapes such as Gaussian or Lorentzian peaks and Exponential decays that are widely used in many scientific domains. These are available in the models module that will be discussed in more detail in the next chapter (Built-in Fitting Models in the models module). We mention it here as you may want to consult that list before writing your own model. For now, we focus on turning Python functions into high-level fitting models with the Model class, and using these to fit data.

Motivation and simple example: Fit data to Gaussian profile

Let’s start with a simple and common example of fitting data to a Gaussian peak. As we will see, there is a built-in GaussianModel class that can help do this, but here we’ll build our own. We start with a simple definition of the model function:

from numpy import exp, linspace, random

def gaussian(x, amp, cen, wid):
    return amp * exp(-(x-cen)**2 / wid)

We want to use this function to fit to data \(y(x)\) represented by the arrays y and x. With scipy.optimize.curve_fit, this would be:

from scipy.optimize import curve_fit

x = linspace(-10, 10, 101)
y = gaussian(x, 2.33, 0.21, 1.51) + random.normal(0, 0.2, x.size)

init_vals = [1, 0, 1]  # for [amp, cen, wid]
best_vals, covar = curve_fit(gaussian, x, y, p0=init_vals)

That is, we create data, make an initial guess of the model values, and run scipy.optimize.curve_fit with the model function, data arrays, and initial guesses. The results returned are the optimal values for the parameters and the covariance matrix. It’s simple and useful, but it misses the benefits of lmfit.
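One thing scipy.optimize.curve_fit does give you: one-sigma uncertainties can be recovered from the diagonal of the returned covariance matrix. A minimal sketch, assuming the gaussian function defined above (seed and noise level are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, cen, wid):
    return amp * np.exp(-(x - cen)**2 / wid)

rng = np.random.default_rng(0)
x = np.linspace(-10, 10, 101)
y = gaussian(x, 2.33, 0.21, 1.51) + rng.normal(0, 0.2, x.size)

best_vals, covar = curve_fit(gaussian, x, y, p0=[1, 0, 1])
perr = np.sqrt(np.diag(covar))  # one-sigma uncertainties for [amp, cen, wid]
```

This is the extent of the uncertainty reporting curve_fit offers; lmfit's fit report, shown below, goes further.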

With lmfit, we create a Model that wraps the gaussian model function, which automatically generates the appropriate residual function, and determines the corresponding parameter names from the function signature itself:

from lmfit import Model

gmodel = Model(gaussian)
print(f'parameter names: {gmodel.param_names}')
print(f'independent variables: {gmodel.independent_vars}')

parameter names: ['amp', 'cen', 'wid']
independent variables: ['x']

As you can see, the Model gmodel determined the names of the parameters and the independent variables. By default, the first argument of the function is taken as the independent variable, held in independent_vars, and the rest of the function’s positional arguments (and, in certain cases, keyword arguments – see below) are used for Parameter names. Thus, for the gaussian function above, the independent variable is x, and the parameters are named amp, cen, and wid – all taken directly from the signature of the model function. As we will see below, you can modify the default assignment of independent variable / arguments and specify yourself what the independent variable is and which function arguments should be identified as parameter names.

The Parameters are not created when the model is created. The model knows what the parameters should be named, but nothing about the scale and range of your data. You will normally have to make these parameters and assign initial values and other attributes. To help you do this, each model has a make_params() method that will generate parameters with the expected names:

params = gmodel.make_params()

This creates the Parameters but does not automatically give them initial values since it has no idea what the scale should be. You can set initial values for parameters with keyword arguments to make_params():

params = gmodel.make_params(cen=0.3, amp=3, wid=1.25)

or assign them (and other parameter properties) after the Parameters have been created.

A Model has several methods associated with it. For example, one can use the eval() method to evaluate the model or the fit() method to fit data to this model with a Parameters object. Both of these methods can take explicit keyword arguments for the parameter values. For example, one could use eval() to calculate the predicted function:

x_eval = linspace(0, 10, 201)
y_eval = gmodel.eval(params, x=x_eval)

or with:

y_eval = gmodel.eval(x=x_eval, cen=6.5, amp=100, wid=2.0)

Admittedly, this is a slightly long-winded way to calculate a Gaussian function, given that you could have called your gaussian function directly. But now that the model is set up, we can use its fit() method to fit this model to data, as with:

result = gmodel.fit(y, params, x=x)

or with:

result = gmodel.fit(y, x=x, cen=0.5, amp=10, wid=2.0)

Putting everything together, included in the examples folder with the source code, is:

import matplotlib.pyplot as plt
from numpy import exp, linspace, random

from lmfit import Model


def gaussian(x, amp, cen, wid):
    return amp * exp(-(x-cen)**2 / wid)


x = linspace(-10, 10, 101)
y = gaussian(x, 2.33, 0.21, 1.51) + random.normal(0, 0.2, x.size)

gmodel = Model(gaussian)
result = gmodel.fit(y, x=x, amp=5, cen=5, wid=1)

print(result.fit_report())

plt.plot(x, y, 'o')
plt.plot(x, result.init_fit, '--', label='initial fit')
plt.plot(x, result.best_fit, '-', label='best fit')
plt.legend()
plt.show()

which is pretty compact and to the point. The returned result will be a ModelResult object. As we will see below, this has many components, including a fit_report() method, which will show:

(the output of fit_report(): a summary of the fit statistics, and the best-fit values with estimated uncertainties and correlations for amp, cen, and wid)

As the script shows, the result will also have an init_fit attribute for the fit with the initial parameter values and a best_fit attribute for the fit with the best fit parameter values. These can be used to generate the following plot:

which shows the data in blue dots, the best fit as a solid green line, and the initial fit as a dashed orange line.

Note that the model fitting was really performed with:

gmodel = Model(gaussian)
result = gmodel.fit(y, x=x, amp=5, cen=5, wid=1)

These lines clearly express that we want to turn the gaussian function into a fitting model, and then fit the \(y(x)\) data to this model, starting with values of 5 for amp, 5 for cen, and 1 for wid. In addition, all the other features of lmfit are included: Parameters can have bounds and constraints, and the result is a rich ModelResult object that can be reused to explore the model fit in detail.

The Model class

The Model class provides a general way to wrap a pre-defined function as a fitting model.

class Model(func, independent_vars=None, param_names=None, nan_policy='raise', prefix='', name=None, **kws)

Create a model from a user-supplied model function.

The model function will normally take an independent variable (generally, the first argument) and a series of arguments that are meant to be parameters for the model. It will return an array of data to model some data as for a curve-fitting problem.

Parameters:
  • func (callable) – Function to be wrapped.

  • independent_vars (list of str, optional) – Arguments to func that are independent variables (default is None).

  • param_names (list of str, optional) – Names of arguments to func that are to be made into parameters (default is None).

  • nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and missing values in data. See Notes below.

  • prefix (str, optional) – Prefix used for the model.

  • name (str, optional) – Name for the model. When None (default) the name is the same as the model function (func).

  • **kws (dict, optional) – Additional keyword arguments to pass to model function.

Notes

1. Parameter names are inferred from the function arguments, and a residual function is automatically constructed.

2. The model function must return an array that will be the same size as the data being modeled.

3. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:

  • ‘raise’ : raise a ValueError (default)

  • ‘propagate’ : do nothing

  • ‘omit’ : drop missing data

Examples

The model function will normally take an independent variable (generally, the first argument) and a series of arguments that are meant to be parameters for the model. Thus, a simple peak using a Gaussian defined as:

def gaussian(x, amp, cen, wid):
    return amp * exp(-(x-cen)**2 / wid)

can be turned into a Model with:

gmodel = Model(gaussian)

this will automatically discover the names of the independent variables and parameters:

print(f'independent variables: {gmodel.independent_vars}')
print(f'parameter names: {gmodel.param_names}')

independent variables: ['x']
parameter names: ['amp', 'cen', 'wid']

class Methods

Model.eval(params=None, **kwargs)

Evaluate the model with supplied parameters and keyword arguments.

Parameters:
  • params (Parameters, optional) – Parameters to use in Model.

  • **kwargs (optional) – Additional keyword arguments to pass to model function.

Returns:

Value of model given the parameters and other arguments.

Return type:

numpy.ndarray, float, or int

Notes

1. if params is None, the values for all parameters are expected to be provided as keyword arguments. If params is given, and a keyword argument for a parameter value is also given, the keyword argument will be used.

2. all non-parameter arguments for the model function, including all the independent variables will need to be passed in using keyword arguments.

3. The return type depends on the model function. For many of the built-in models it is a numpy.ndarray, with the exception of ConstantModel and ComplexConstantModel, which return a float/int or complex value.

Model.fit(data, params=None, weights=None, method='leastsq', iter_cb=None, scale_covar=True, verbose=False, fit_kws=None, nan_policy=None, calc_covar=True, max_nfev=None, **kwargs)

Fit the model to the data using the supplied Parameters.

Parameters:
  • data (array_like) – Array of data to be fit.

  • params (Parameters, optional) – Parameters to use in fit (default is None).

  • weights (array_like, optional) – Weights to use for the calculation of the fit residual [i.e., weights*(data-fit)]. Default is None; must have the same size as data.

  • method (str, optional) – Name of fitting method to use (default is ‘leastsq’).

  • iter_cb (callable, optional) – Callback function to call at each iteration (default is None).

  • scale_covar (bool, optional) – Whether to automatically scale the covariance matrix when calculating uncertainties (default is True).

  • verbose (bool, optional) – Whether to print a message when a new parameter is added because of a hint (default is False).

  • fit_kws (dict, optional) – Options to pass to the minimizer being used.

  • nan_policy ({'raise', 'propagate', 'omit'}, optional) – What to do when encountering NaNs when fitting Model.

  • calc_covar (bool, optional) – Whether to calculate the covariance matrix (default is True) for solvers other than ‘leastsq’ and ‘least_squares’. Requires the numdifftools package to be installed.

  • max_nfev (int or None, optional) – Maximum number of function evaluations (default is None). The default value depends on the fitting method.

  • **kwargs (optional) – Arguments to pass to the model function, possibly overriding parameters.

Return type:

ModelResult

Notes

1. if params is None, the values for all parameters are expected to be provided as keyword arguments. If params is given, and a keyword argument for a parameter value is also given, the keyword argument will be used.

2. all non-parameter arguments for the model function, including all the independent variables will need to be passed in using keyword arguments.

3. Parameters (however passed in), are copied on input, so the original Parameter objects are unchanged, and the updated values are in the returned ModelResult.

Examples

Take t to be the independent variable and data to be the curve we will fit. Use keyword arguments to set initial guesses:

result = my_model.fit(data, tau=5, N=3, t=t)

Or, for more control, pass a Parameters object.

result = my_model.fit(data, params, t=t)

Keyword arguments override Parameters.

result = my_model.fit(data, params, tau=5, t=t)

Model.guess(data, x, **kws)

Guess starting values for the parameters of a Model.

This is not implemented for all models, but is available for many of the built-in models.

Parameters:
  • data (array_like) – Array of data (i.e., y-values) to use to guess parameter values.

  • x (array_like) – Array of values for the independent variable (i.e., x-values).

  • **kws (optional) – Additional keyword arguments, passed to model function.

Returns:

Initial, guessed values for the parameters of a Model.

Return type:

Parameters

Raises:

NotImplementedError – If the guess method is not implemented for a Model.

Notes

Should be implemented for each model subclass to run self.make_params(), update starting values and return a Parameters object.

Changed in version 1.0.3: Argument x is now explicitly required to estimate starting values.

Model.make_params(verbose=False, **kwargs)

Create a Parameters object for a Model.

Parameters:
  • verbose (bool, optional) – Whether to print out messages (default is False).

  • **kwargs (optional) – Parameter names and initial values.

Returns:

params – Parameters object for the Model.

Return type:

Parameters

Notes

1. The parameters may or may not have decent initial values for each parameter.

2. This applies any default values or parameter hints that may have been set.

Model.set_param_hint(name, **kwargs)

Set hints to use when creating parameters with make_params().

This is especially convenient for setting initial values. The name can include the model’s prefix or not. The hint given can also include optional bounds and constraints (value, vary, min, max, expr), which will be used by make_params() when building default parameters.

Parameters:
  • name (str) – Parameter name.

  • **kwargs (optional) –

    Arbitrary keyword arguments, needs to be a Parameter attribute. Can be any of the following:

    • value (float, optional) – Numerical Parameter value.

    • vary (bool, optional) – Whether the Parameter is varied during a fit (default is True).

    • min (float, optional) – Lower bound for value (default is -numpy.inf, no lower bound).

    • max (float, optional) – Upper bound for value (default is numpy.inf, no upper bound).

    • expr (str, optional) – Mathematical expression used to constrain the value during the fit.

Example

model = GaussianModel()
model.set_param_hint('sigma', min=0)

See Using parameter hints.

Model.print_param_hints(colwidth=8)

Print a nicely aligned text-table of parameter hints.

Parameters:

colwidth (int, optional) – Width of each column, except for first and last columns.

class Attributes

func

The model function used to calculate the model.

independent_vars

List of strings for names of the independent variables.

nan_policy

Describes what to do for NaNs that indicate missing values in the data. The choices are:

  • ‘raise’: Raise a ValueError (default)

  • ‘propagate’: Do not check for NaNs or missing values. The fit will try to ignore them.

  • ‘omit’: Remove NaNs or missing observations in data. If pandas is installed, pandas.isnull() is used, otherwise numpy.isnan() is used.

name

Name of the model, used only in the string representation of the model. By default this will be taken from the model function.

opts

Extra keyword arguments to pass to model function. Normally this will be determined internally and should not be changed.

param_hints

Dictionary of parameter hints. See Using parameter hints.

param_names

List of strings of parameter names.

prefix

Prefix used for name-mangling of parameter names. The default is '' (no prefix). If a particular Model has arguments amplitude, center, and sigma, these would become the parameter names. Using a prefix of 'g1_' would convert these parameter names to g1_amplitude, g1_center, and g1_sigma. This can be essential to avoid name collision in composite models.

Determining parameter names and independent variables for a function

The Model created from the supplied function func will create a Parameters object; names are inferred from the function arguments, and a residual function is automatically constructed.

By default, the independent variable is taken as the first argument to the function. You can, of course, explicitly set this, and will need to do so if the independent variable is not first in the list, or if there is actually more than one independent variable.

If not specified, Parameters are constructed from all positional arguments and all keyword arguments that have a default value that is numerical, except the independent variable, of course. Importantly, the Parameters can be modified after creation. In fact, you will have to do this because none of the parameters have valid initial values. In addition, one can place bounds and constraints on Parameters, or fix their values.

Explicitly specifying independent_vars

As we saw for the Gaussian example above, creating a Model from a function is fairly easy. Let’s try another one:

from numpy import exp
from lmfit import Model

def decay(t, tau, N):
    return N*exp(-t/tau)

decay_model = Model(decay)
print(f'independent variables: {decay_model.independent_vars}')
print(f'parameter names: {decay_model.param_names}')

independent variables: ['t']
parameter names: ['tau', 'N']

Here, t is assumed to be the independent variable because it is the first argument to the function. The other function arguments are used to create parameters for the model.

If you want tau to be the independent variable in the above example, you can say so:

decay_model = Model(decay, independent_vars=['tau'])
print(f'independent variables: {decay_model.independent_vars}')
print(f'parameter names: {decay_model.param_names}')

independent variables: ['tau']
parameter names: ['t', 'N']

You can also supply multiple values for multi-dimensional functions with multiple independent variables. In fact, the meaning of independent variable here is simple, and based on how it treats arguments of the function you are modeling:

independent variable

A function argument that is not a parameter or otherwise part of the model, and that will be required to be explicitly provided as a keyword argument for each fit with Model.fit() or evaluation with Model.eval().

Note that independent variables are not required to be arrays, or even floating point numbers.

Functions with keyword arguments

If the model function had keyword parameters, these would be turned into Parameters if the supplied default value was a valid number (but not None, True, or False).

from numpy import exp
from lmfit import Model

def decay2(t, tau, N=10, check_positive=False):
    if check_positive and tau < 0:
        raise ValueError('tau must be positive')
    return N*exp(-t/tau)

mod = Model(decay2)
print(f'parameter names: {mod.param_names}')

parameter names: ['tau', 'N']

Here, even though N is a keyword argument to the function, it is turned into a parameter, with the default numerical value as its initial value. By default, it is permitted to be varied in the fit – the 10 is taken as an initial value, not a fixed value. On the other hand, the check_positive keyword argument was not converted to a parameter because it has a boolean default value. In some sense, check_positive becomes like an independent variable to the model. However, because it has a default value, it is not required to be given for each model evaluation or fit, as independent variables are.

Defining a prefix for the Parameters

As we will see in the next chapter when combining models, it is sometimes necessary to decorate the parameter names in the model, but still have them be correctly used in the underlying model function. This would be necessary, for example, if two parameters in a composite model (see CompositeModel, or the examples in the next chapter) had the same name. To avoid this, we can add a prefix to the Model, which will automatically do this mapping for us.

from lmfit import Model

def myfunc(x, amplitude=1, center=0, sigma=1):
    # function body here
    ...

mod = Model(myfunc, prefix='f1_')
print(f'parameter names: {mod.param_names}')

parameter names: ['f1_amplitude', 'f1_center', 'f1_sigma']

You would refer to these parameters as f1_amplitude and so forth, and the model will know to map these to the amplitude argument of myfunc.

Initializing model parameters

As mentioned above, the parameters created by make_params() are generally created with invalid initial values of None. These values must be initialized in order for the model to be evaluated or used in a fit. There are four different ways to do this initialization that can be used in any combination:

  1. You can supply initial values in the definition of the model function.

  2. You can initialize the parameters when creating parameters with make_params().

  3. You can give parameter hints with set_param_hint().

  4. You can supply initial values for the parameters when you use the eval() or fit() methods.

Of course these methods can be mixed, allowing you to overwrite initial values at any point in the process of defining and using the model.

Initializing values in the function definition

To supply initial values for parameters in the definition of the model function, you can simply supply a default value:

def gaussian(x, amp=1, cen=0, wid=1):
    """1-d gaussian: gaussian(x, amp, cen, wid)"""
    return (amp / (sqrt(2*pi) * wid)) * exp(-(x-cen)**2 / (2*wid**2))

instead of using:

def gaussian(x, amp, cen, wid):
    """1-d gaussian: gaussian(x, amp, cen, wid)"""
    return (amp / (sqrt(2*pi) * wid)) * exp(-(x-cen)**2 / (2*wid**2))

This has the advantage of working at the function level – all parameters with keywords can be treated as options. It also means that some default initial value will always be available for the parameter.

Initializing values with

When creating parameters with you can specify initial values. To do this, use keyword arguments for the parameter names and initial values:

params = gmodel.make_params(cen=0.3, amp=3, wid=1.25)

Initializing values by setting parameter hints

After a model has been created, but prior to creating parameters with make_params(), you can set parameter hints. These allow you to set not only a default initial value but also other parameter attributes controlling bounds, whether the parameter is varied in the fit, or a constraint expression. To set a parameter hint, you can use set_param_hint(), as with:

gmodel = Model(gaussian)
gmodel.set_param_hint('amp', value=3)
gmodel.set_param_hint('wid', value=1.25, min=0)

Parameter hints are discussed in more detail in the Using parameter hints section below.

Initializing values when using a model

Finally, you can explicitly supply initial values when using a model. That is, as with make_params(), you can include values as keyword arguments to either the eval() or fit() methods:

y_eval = gmodel.eval(x=x_eval, cen=0.3, amp=3, wid=1.25)

These approaches to initialization provide many opportunities for setting initial values for parameters. The methods can be combined, so that you can set parameter hints but then change the initial value explicitly with a keyword argument to eval() or fit().

Using parameter hints

After a model has been created, you can give it hints for how to create parameters with make_params(). This allows you to set not only a default initial value but also other parameter attributes controlling bounds, whether a parameter is varied in the fit, or a constraint expression. To set a parameter hint, you can use set_param_hint(), as with:

gmodel = Model(gaussian)
gmodel.set_param_hint('wid', value=1.25, min=0)
params = gmodel.make_params()

Parameter hints are stored in a model’s attribute, which is simply a nested dictionary:

print('Parameter hints:')
for pname, par in gmodel.param_hints.items():
    print(pname, par)

You can change this dictionary directly or with the set_param_hint() method. Either way, these parameter hints are used by make_params() when making parameters.

An important feature of parameter hints is that you can force the creation of new parameters with parameter hints. This can be useful to make derived parameters with constraint expressions. For example to get the full-width at half maximum of a Gaussian model, one could use a parameter hint of:

gmodel.set_param_hint('fwhm', expr='2.3548*wid')

Saving and Loading Models

New in version 0.9.8.

It is sometimes desirable to save a Model for later use outside of the code used to define the model. Lmfit provides a save_model() function that will save a Model to a file. There is also a companion load_model() function that can read this file and reconstruct a Model from it.

Saving a model turns out to be somewhat challenging. The main issue is that Python is not normally able to serialize a function (such as the model function making up the heart of the Model) in a way that can be reconstructed into a callable Python object. The dill package can sometimes serialize functions, but with the limitation that it can be used only with the same version of Python. In addition, class methods used as model functions will not retain the rest of the class attributes and methods, and so may not be usable. With all those warnings, it should be emphasized that if you are willing to save or reuse the definition of the model function as Python code, then saving the Parameters and the rest of the components that make up a model presents no problem.

If the dill package is installed, the model function will also be saved using it. But because saving the model function is not always reliable, saving a model will always save the name of the model function. The load_model() function takes an optional funcdefs argument that can contain a dictionary of function definitions with the function names as keys and function objects as values. If one of the dictionary keys matches the saved name, the corresponding function object will be used as the model function. If it is not found by name, and if dill was used to save the model, and if dill is available at run-time, the dill-encoded function will be tried. Note that this approach will generally allow you to save a model that can be used by another installation of the same version of Python, but may not work across Python versions. For preserving fits for extended periods of time (say, archiving for documentation of scientific results), we strongly encourage you to save the full Python code used for the model function and fit process.
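The name-based lookup described above can be sketched generically. This is an illustration of the stated precedence (funcdefs first, dill payload as fallback), not lmfit's actual implementation:

```python
def resolve_model_function(saved_name, funcdefs=None, dill_payload=None):
    """Sketch: restore a model function from its saved name."""
    # 1. explicit definitions supplied by the caller win
    if funcdefs and saved_name in funcdefs:
        return funcdefs[saved_name]
    # 2. otherwise fall back to the dill-serialized function, if any
    if dill_payload is not None:
        try:
            import dill
            return dill.loads(dill_payload)
        except Exception:
            pass
    raise ValueError(f"could not restore model function {saved_name!r}")

def gaussian(x, amp, cen, wid):
    return amp  # placeholder body for the sketch

func = resolve_model_function('gaussian', funcdefs={'gaussian': gaussian})
print(func is gaussian)
```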

save_model(model, fname)

Save a Model to a file.

Parameters:
  • model (Model) – Model to be saved.

  • fname (str) – Name of file for saved Model.

load_model(fname, funcdefs=None)

Load a saved Model from a file.

Parameters:
  • fname (str) – Name of file containing saved Model.

  • funcdefs (dict, optional) – Dictionary of custom function names and definitions.

Returns:

Model object loaded from file.

Return type:

Model

As a simple example, one can save a model as:

gmodel = Model(gaussian)
save_model(gmodel, 'mymodel.sav')

To load that later, one might do:

gmodel = load_model('mymodel.sav')

See also the Saving and Loading ModelResults section below.

The ModelResult class

A ModelResult (which had been called ModelFit prior to version 0.9) is the object returned by Model.fit(). It is a subclass of Minimizer, and so contains many of the fit results. Of course, it knows the Model and the set of Parameters used in the fit, and it has methods to evaluate the model, to fit the data (or re-fit the data with changes to the parameters, or fit with different or modified data), and to print out a report for that fit.

While a Model encapsulates your model function, it is fairly abstract and does not contain the parameters or data used in a particular fit. A ModelResult does contain parameters and data as well as methods to alter and re-do fits. Thus the Model is the idealized model while the ModelResult is the messier, more complex (but perhaps more useful) object that represents a fit with a set of parameters to data with a model.

A ModelResult has several attributes holding values for fit results, and several methods for working with fits. These include statistics inherited from Minimizer useful for comparing different models, including chisqr, redchi, aic, and bic.
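For reference, these statistics are simple functions of the residual array; a sketch of how they can be computed for an unweighted fit, following the usual chi-square and information-criterion definitions:

```python
import numpy as np

def fit_statistics(residual, nvarys):
    """Compute chi-square, reduced chi-square, AIC, and BIC from a
    residual array and the number of varying parameters."""
    ndata = len(residual)
    chisqr = float(np.sum(residual**2))
    redchi = chisqr / max(1, ndata - nvarys)   # chi-square per degree of freedom
    # information criteria from the Gaussian log-likelihood
    aic = ndata * np.log(chisqr / ndata) + 2 * nvarys
    bic = ndata * np.log(chisqr / ndata) + np.log(ndata) * nvarys
    return chisqr, redchi, aic, bic

resid = np.array([0.5, -0.5, 0.5, -0.5])   # hypothetical residual array
chisqr, redchi, aic, bic = fit_statistics(resid, nvarys=2)
print(chisqr, redchi)
```

Lower aic or bic generally indicates a better model for the same data, with bic penalizing extra parameters more strongly.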

class ModelResult(model, params, data=None, weights=None, method='leastsq', fcn_args=None, fcn_kws=None, iter_cb=None, scale_covar=True, nan_policy='raise', calc_covar=True, max_nfev=None, **fit_kws)

Result from the Model fit.

This has many attributes and methods for viewing and working with the results of a fit using Model. It inherits from Minimizer, so that it can be used to modify and re-run the fit for the Model.

Parameters:
  • model (Model) – Model to use.

  • params (Parameters) – Parameters with initial values for model.

  • data (array_like, optional) – Data to be modeled.

  • weights (array_like, optional) – Weights to multiply (data-model) for fit residual.

  • method (str, optional) – Name of minimization method to use (default is ‘leastsq’).

  • fcn_args (sequence, optional) – Positional arguments to send to model function.

  • fcn_kws (dict, optional) – Keyword arguments to send to model function.

  • iter_cb (callable, optional) – Function to call on each iteration of fit.

  • scale_covar (bool, optional) – Whether to scale covariance matrix for uncertainty evaluation.

  • nan_policy ({'raise', 'propagate', 'omit'}, optional) – What to do when encountering NaNs when fitting Model.

  • calc_covar (bool, optional) – Whether to calculate the covariance matrix (default is True) for solvers other than ‘leastsq’ and ‘least_squares’. Requires the numdifftools package to be installed.

  • max_nfev (int or None, optional) – Maximum number of function evaluations (default is None). The default value depends on the fitting method.

  • **fit_kws (optional) – Keyword arguments to send to minimization routine.

methods

ModelResult.eval(params=None, **kwargs)

Evaluate model function.

Parameters:
  • params (Parameters, optional) – Parameters to use.

  • **kwargs (optional) – Options to send to Model.eval().

Returns:

Array or value for the evaluated model.

Return type:

numpy.ndarray, float, int, or complex

ModelResult.eval_components(params=None, **kwargs)

Evaluate each component of a composite model function.

Parameters:
  • params (Parameters, optional) – Parameters, defaults to ModelResult.params.

  • **kwargs (optional) – Keyword arguments to pass to model function.

Returns:

Keys are prefixes of component models, and values are the estimated model value for each component of the model.

Return type:

dict

ModelResult.fit(data=None, params=None, weights=None, method=None, nan_policy=None, **kwargs)

Re-perform fit for a Model, given data and params.

Parameters:
  • data (array_like, optional) – Data to be modeled.

  • params (Parameters, optional) – Parameters with initial values for model.

  • weights (array_like, optional) – Weights to multiply (data-model) for fit residual.

  • method (str, optional) – Name of minimization method to use (default is ‘leastsq’).

  • nan_policy ({'raise', 'propagate', 'omit'}, optional) – What to do when encountering NaNs when fitting Model.

  • **kwargs (optional) – Keyword arguments to send to minimization routine.

ModelResult.fit_report(modelpars=None, show_correl=True, min_correl=0.1, sort_pars=False)

Return a printable fit report.

The report contains fit statistics and best-fit values with uncertainties and correlations.

Parameters:
  • modelpars (Parameters, optional) – Known Model Parameters.

  • show_correl (bool, optional) – Whether to show list of sorted correlations (default is True).

  • min_correl (float, optional) – Smallest correlation in absolute value to show (default is 0.1).

  • sort_pars (bool or callable, optional) – Whether to show parameter names sorted in alphanumerical order (default is False). If False, then the parameters will be listed in the order they were added to the Parameters dictionary. If callable, then this (one argument) function is used to extract a comparison key from each list element.

Returns:

Multi-line text of fit report.

Return type:

str

ModelResult.conf_interval(**kwargs)

Calculate the confidence intervals for the variable parameters.

Confidence intervals are calculated using the conf_interval() function and keyword arguments (**kwargs) are passed to that function. The result is stored in the ci_out attribute so that it can be accessed without recalculating them.

ModelResult.ci_report(with_offset=True, ndigits=5, **kwargs)

Return a formatted text report of the confidence intervals.

Parameters:
  • with_offset (bool, optional) – Whether to subtract best value from all other values (default is True).

  • ndigits (int, optional) – Number of significant digits to show (default is 5).

  • **kwargs (optional) – Keyword arguments that are passed to the conf_interval function.

Returns:

Text of formatted report on confidence intervals.

Return type:

str

ModelResult.eval_uncertainty(params=None, sigma=1, **kwargs)

Evaluate the uncertainty of the model function.

This can be used to give confidence bands for the model from the uncertainties in the best-fit parameters.

Parameters:
  • params (Parameters, optional) – Parameters, defaults to ModelResult.params.

  • sigma (float, optional) – Confidence level, i.e. how many sigma (default is 1).

  • **kwargs (optional) – Values of options, independent variables, etcetera.

Returns:

Uncertainty at each value of the model.

Return type:

numpy.ndarray

Notes

  1. This is based on the excellent and clear example from , which references the original work of: J. Wolberg, Data Analysis Using the Method of Least Squares, 2006, Springer

  2. The value of sigma is the number of sigma values, and is converted to a probability. Values of 1, 2, or 3 give probabilities of 0.6827, 0.9545, and 0.9973, respectively. If the sigma value is < 1, it is interpreted as the probability itself. That is, sigma=1 and sigma=0.6827 will give the same results, within precision errors.

  3. Also sets attributes of dely for the uncertainty of the model (which will be the same as the array returned by this method) and dely_comps, a dictionary of dely for each component.

Examples

out = gmodel.fit(y, params, x=x)
dely = out.eval_uncertainty(x=x)
plt.plot(x, y, 'o')
plt.plot(x, out.best_fit)
plt.fill_between(x, out.best_fit - dely, out.best_fit + dely)

ModelResult.plot(datafmt='o', fitfmt='-', initfmt='--', xlabel=None, ylabel=None, yerr=None, numpoints=None, fig=None, data_kws=None, fit_kws=None, init_kws=None, ax_res_kws=None, ax_fit_kws=None, fig_kws=None, show_init=False, parse_complex='abs', title=None)

Plot the fit results and residuals using matplotlib.

The method will produce a matplotlib figure (if the package is available) with both the results of the fit and the residuals plotted. If the fit model included weights, errorbars will also be plotted. To show the initial conditions for the fit, pass the argument show_init=True.

Parameters:
  • datafmt (str, optional) – Matplotlib format string for data points.

  • fitfmt (str, optional) – Matplotlib format string for fitted curve.

  • initfmt (str, optional) – Matplotlib format string for initial conditions for the fit.

  • xlabel (str, optional) – Matplotlib format string for labeling the x-axis.

  • ylabel (str, optional) – Matplotlib format string for labeling the y-axis.

  • yerr (numpy.ndarray, optional) – Array of uncertainties for data array.

  • numpoints (int, optional) – If provided, the final and initial fit curves are evaluated not only at data points, but refined to contain numpoints points in total.

  • fig (matplotlib.figure.Figure, optional) – The figure to plot on. The default is None, which means use the current pyplot figure or create one if there is none.

  • data_kws (dict, optional) – Keyword arguments passed to the plot function for data points.

  • fit_kws (dict, optional) – Keyword arguments passed to the plot function for fitted curve.

  • init_kws (dict, optional) – Keyword arguments passed to the plot function for the initial conditions of the fit.

  • ax_res_kws (dict, optional) – Keyword arguments for the axes for the residuals plot.

  • ax_fit_kws (dict, optional) – Keyword arguments for the axes for the fit plot.

  • fig_kws (dict, optional) – Keyword arguments for a new figure, if a new one is created.

  • show_init (bool, optional) – Whether to show the initial conditions for the fit (default is False).

  • parse_complex ({'abs', 'real', 'imag', 'angle'}, optional) – How to reduce complex data for plotting. Options are one of: ‘abs’ (default), ‘real’, ‘imag’, or ‘angle’, which correspond to the NumPy functions with the same name.

  • title (str, optional) – Matplotlib format string for figure title.

Return type:

See also

Plot the fit results using matplotlib.

Plot the fit residuals using matplotlib.

Notes

The method combines ModelResult.plot_fit and ModelResult.plot_residuals.

If yerr is specified or if the fit model included weights, then matplotlib.axes.Axes.errorbar is used to plot the data. If yerr is not specified and the fit includes weights, yerr is set to 1/self.weights.

If model returns complex data, yerr is treated the same way that weights are in this case.

If fig is None then matplotlib.pyplot.figure(**fig_kws) is called, otherwise fig_kws is ignored.

ModelResult.plot_fit(ax=None, datafmt='o', fitfmt='-', initfmt='--', xlabel=None, ylabel=None, yerr=None, numpoints=None, data_kws=None, fit_kws=None, init_kws=None, ax_kws=None, show_init=False, parse_complex='abs', title=None)

Plot the fit results using matplotlib, if available.

The plot will include the data points, the initial fit curve (optional, with show_init=True), and the best-fit curve. If the fit model included weights or if yerr is specified, errorbars will also be plotted.

Parameters:
  • ax (matplotlib.axes.Axes, optional) – The axes to plot on. The default is None, which means use the current pyplot axis or create one if there is none.

  • datafmt (str, optional) – Matplotlib format string for data points.

  • fitfmt (str, optional) – Matplotlib format string for fitted curve.

  • initfmt (str, optional) – Matplotlib format string for initial conditions for the fit.

  • xlabel (str, optional) – Matplotlib format string for labeling the x-axis.

  • ylabel (str, optional) – Matplotlib format string for labeling the y-axis.

  • yerr (numpy.ndarray, optional) – Array of uncertainties for data array.

  • numpoints (int, optional) – If provided, the final and initial fit curves are evaluated not only at data points, but refined to contain numpoints points in total.

  • data_kws (dict, optional) – Keyword arguments passed to the plot function for data points.

  • fit_kws (dict, optional) – Keyword arguments passed to the plot function for fitted curve.

  • init_kws (dict, optional) – Keyword arguments passed to the plot function for the initial conditions of the fit.

  • ax_kws (dict, optional) – Keyword arguments for a new axis, if a new one is created.

  • show_init (bool, optional) – Whether to show the initial conditions for the fit (default is False).

  • parse_complex ({'abs', 'real', 'imag', 'angle'}, optional) – How to reduce complex data for plotting. Options are one of: ‘abs’ (default), ‘real’, ‘imag’, or ‘angle’, which correspond to the NumPy functions with the same name.

  • title (str, optional) – Matplotlib format string for figure title.

Return type:

See also

Plot the fit residuals using matplotlib.

Plot the fit results and residuals using matplotlib.

Notes

For details about plot format strings and keyword arguments see documentation of matplotlib.axes.Axes.plot.

If yerr is specified or if the fit model included weights, then matplotlib.axes.Axes.errorbar is used to plot the data. If yerr is not specified and the fit includes weights, yerr is set to 1/self.weights.

If model returns complex data, yerr is treated the same way that weights are in this case.

If ax is None then matplotlib.pyplot.gca(**ax_kws) is called.

ModelResult.plot_residuals(ax=None, datafmt='o', yerr=None, data_kws=None, fit_kws=None, ax_kws=None, parse_complex='abs', title=None)

Plot the fit residuals using matplotlib, if available.

If yerr is supplied or if the model included weights, errorbars will also be plotted.

Parameters:
  • ax (matplotlib.axes.Axes, optional) – The axes to plot on. The default is None, which means use the current pyplot axis or create one if there is none.

  • datafmt (str, optional) – Matplotlib format string for data points.

  • yerr (numpy.ndarray, optional) – Array of uncertainties for data array.

  • data_kws (dict, optional) – Keyword arguments passed to the plot function for data points.

  • fit_kws (dict, optional) – Keyword arguments passed to the plot function for fitted curve.

  • ax_kws (dict, optional) – Keyword arguments for a new axis, if a new one is created.

  • parse_complex ({'abs', 'real', 'imag', 'angle'}, optional) – How to reduce complex data for plotting. Options are one of: ‘abs’ (default), ‘real’, ‘imag’, or ‘angle’, which correspond to the NumPy functions with the same name.

  • title (str, optional) – Matplotlib format string for figure title.

Return type:

See also

Plot the fit results using matplotlib.

Plot the fit results and residuals using matplotlib.

Notes

For details about plot format strings and keyword arguments see documentation of matplotlib.axes.Axes.plot.

If yerr is specified or if the fit model included weights, then matplotlib.axes.Axes.errorbar is used to plot the data. If yerr is not specified and the fit includes weights, yerr is set to 1/self.weights.

If model returns complex data, yerr is treated the same way that weights are in this case.

If ax is None then matplotlib.pyplot.gca(**ax_kws) is called.

attributes

aic

Floating point best-fit Akaike Information Criterion statistic.

best_fit

numpy.ndarray result of model function, evaluated at provided independent variables and with best-fit parameters.

best_values

Dictionary with parameter names as keys, and best-fit values as values.

bic

Floating point best-fit Bayesian Information Criterion statistic.

chisqr

Floating point best-fit chi-square statistic.

ci_out

Confidence interval data, or None if the confidence intervals have not been calculated.

covar

numpy.ndarray (square) covariance matrix returned from fit.

data

numpy.ndarray of data to compare to model.

dely

numpy.ndarray of estimated uncertainties in the y values of the model, from eval_uncertainty().

dely_comps

a dictionary of estimated uncertainties in the y values of the model components, from eval_uncertainty().

errorbars

Boolean for whether error bars were estimated by fit.

ier

Integer returned code from scipy.optimize.leastsq.

init_fit

numpy.ndarray result of model function, evaluated at provided independent variables and with initial parameters.

init_params

Initial parameters.

init_values

Dictionary with parameter names as keys, and initial values as values.

iter_cb

Optional callable function, to be called at each fit iteration. This must take arguments of (params, iter, resid, *args, **kws), where params will have the current parameter values, iter the iteration number, resid the current residual array, and *args and **kws as passed to the objective function.

jacfcn

Optional callable function, to be called to calculate Jacobian array.

lmdif_message

String message returned from scipy.optimize.leastsq.

message

String message returned from minimize().

method

String naming fitting method for minimize().

call_kws

Dict of keyword arguments actually sent to the underlying solver by minimize().

model

Instance of Model used for this fit.

ndata

Integer number of data points.

nfev

Integer number of function evaluations used for fit.

nfree

Integer number of free parameters in fit.

nvarys

Integer number of independent, freely varying variables in fit.

params

Parameters used in fit; will contain the best-fit values.

redchi

Floating point reduced chi-square statistic.

residual

numpy.ndarray for residual.

rsquared

Floating point \(R^2\) statistic, defined for data \(y\) and best-fit model \(f\) as

\begin{eqnarray*} R^2 &=& 1 - \frac{\sum_i (y_i - f_i)^2}{\sum_i (y_i - \bar{y})^2} \end{eqnarray*}
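The formula above can be checked directly; a minimal sketch with NumPy, using hypothetical data and best-fit values:

```python
import numpy as np

def rsquared(y, f):
    """Coefficient of determination R^2 for data y and model f."""
    ss_res = np.sum((y - f)**2)            # residual sum of squares
    ss_tot = np.sum((y - np.mean(y))**2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

y = np.array([1.0, 2.0, 3.0, 4.0])
f = np.array([1.1, 1.9, 3.2, 3.8])        # hypothetical best-fit values
r2 = rsquared(y, f)
print(r2)
```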

scale_covar

Boolean flag for whether to automatically scale covariance matrix.

success

Boolean value of whether fit succeeded.

weights

numpy.ndarray (or None) of weighting values to be used in fit. If not None, it will be used as a multiplicative factor of the residual array, so that weights*(data - fit) is minimized in the least-squares sense.
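The role of weights can be illustrated with scipy.optimize.least_squares directly: the weights multiply the residual array, so points with larger weights (smaller uncertainties) pull harder on the fit. This hypothetical straight-line example shows the construction:

```python
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(0, 10, 21)
data = 2.0 * x + 1.0                        # noiseless line for the sketch
uncertainty = np.where(x < 5, 0.1, 10.0)    # second half is very uncertain
weights = 1.0 / uncertainty

def residual(p):
    model = p[0] * x + p[1]
    # weights multiply the residual, exactly as described above
    return weights * (data - model)

fit = least_squares(residual, x0=[1.0, 0.0])
print(fit.x)  # slope and intercept
```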

Calculating uncertainties in the model function

We return to the first example above and ask not only for the uncertainties in the fitted parameters but for the range of values that those uncertainties mean for the model function itself. We can use the eval_uncertainty() method of the model result object to evaluate the uncertainty in the model with a specified level for \(\sigma\).

That is, adding:

dely = result.eval_uncertainty(sigma=3)
plt.fill_between(x, result.best_fit - dely, result.best_fit + dely)

to the example fit to the Gaussian at the beginning of this chapter will give 3-\(\sigma\) bands for the best-fit Gaussian, and produce the figure below.

New in version 1.0.4.

If the model is a composite built from multiple components, the eval_uncertainty() method will evaluate the uncertainty of both the full model (often the sum of multiple components) as well as the uncertainty in each component. The uncertainty of the full model will be held in dely, and the uncertainties for each component will be held in the dictionary dely_comps, with keys that are the component prefixes.

For a composite model, the component uncertainties can be calculated and used like this:

result.eval_uncertainty(sigma=3)
comps = result.eval_components(x=x)
for prefix, dely in result.dely_comps.items():
    plt.fill_between(x, comps[prefix] - dely, comps[prefix] + dely)

Saving and Loading ModelResults

New in version 0.9.8.

As with saving models (see above), it is sometimes desirable to save a ModelResult, either for later use or to organize and compare different fit results. Lmfit provides a save_modelresult() function that will save a ModelResult to a file. There is also a companion load_modelresult() function that can read this file and reconstruct a ModelResult from it.

As discussed above, there are challenges to saving model functions that may make it difficult to restore a saved ModelResult in a way that can be used to perform a fit. Use of the optional funcdefs argument is generally the most reliable way to ensure that a loaded ModelResult can be used to evaluate the model function or redo the fit.

save_modelresult(modelresult, fname)

Save a ModelResult to a file.

Parameters:
  • modelresult (ModelResult) – ModelResult to be saved.

  • fname (str) – Name of file for saved ModelResult.

load_modelresult(fname, funcdefs=None)

Load a saved ModelResult from a file.

Parameters:
  • fname (str) – Name of file containing saved ModelResult.

  • funcdefs (dict, optional) – Dictionary of custom function names and definitions.

Returns:

ModelResult object loaded from file.

Return type:

ModelResult

An example of saving a ModelResult is:

save_modelresult(result, 'gauss_modelresult.sav')

To load that later, one might do:

result = load_modelresult('gauss_modelresult.sav')
print(result.fit_report())
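The reason funcdefs exists can be sketched in plain Python: parameter values serialize easily, but the model function itself does not, so at load time the function definition must be supplied again, keyed by name. The gaussian model and the JSON "file format" below are purely illustrative, not lmfit's actual serialization:

```python
import json
import math

def gaussian(x, amp, cen, wid):
    """1-D Gaussian, standing in for a user-defined model function."""
    return amp * math.exp(-(x - cen)**2 / (2 * wid**2))

# "saving": the best-fit parameter values serialize without trouble...
saved = json.dumps({'model': 'gaussian',
                    'params': {'amp': 2.33, 'cen': 0.21, 'wid': 1.51}})

# ...but the function body does not, so "loading" needs a mapping from
# stored names to definitions -- the role funcdefs plays in load_modelresult
funcdefs = {'gaussian': gaussian}
state = json.loads(saved)
model_func = funcdefs[state['model']]

# evaluate the restored model at its center, where the exponent is zero,
# so the value equals amp
peak = model_func(state['params']['cen'], **state['params'])
```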

Composite Models : adding (or multiplying) Models

One of the more interesting features of the Model class is that models can be added together or combined with basic algebraic operations (add, subtract, multiply, and divide) to give a composite model. The composite model will have parameters from each of the component models, with all parameters being available to influence the whole model. This ability to combine models will become even more useful in the next chapter, when pre-built subclasses of Model are discussed. For now, we'll consider a simple example and build a model of a Gaussian plus a line, say, to model a peak with a background. For such a simple problem, we could just build a model that included both components:

def gaussian_plus_line(x, amp, cen, wid, slope, intercept):
    """line + 1-d gaussian"""
    gauss = amp * exp(-(x-cen)**2 / (2*wid**2))
    line = slope*x + intercept
    return gauss + line

and use that with:

mod = Model(gaussian_plus_line)

But we already had a function for a Gaussian, and we might later discover that a linear background isn't sufficient, which would mean the model function would have to be changed.

Instead, lmfit allows models to be combined into a CompositeModel. As an alternative to including a linear background in our model function, we could define a linear function:

def line(x, slope, intercept):
    """a line"""
    return slope*x + intercept

and build a composite model with just:

mod = Model(gaussian) + Model(line)

This model has parameters for both component models, and can be used as:

pars = mod.make_params(amp=5, cen=5, wid=1, slope=0, intercept=1)
result = mod.fit(y, pars, x=x)
print(result.fit_report())

which prints out the fit report

and shows the plot on the left.

On the left, data is shown as blue dots, the total fit as a solid green line, and the initial fit as an orange dashed line. The figure on the right again shows the data as blue dots, with the Gaussian component as an orange dashed line and the linear component as a green dashed line. It is created using the following code:

fig, axes = plt.subplots(1, 2, figsize=(12.8, 4.8))
axes[0].plot(x, y, 'o')
axes[0].plot(x, result.init_fit, '--', label='initial fit')
axes[0].plot(x, result.best_fit, '-', label='best fit')
axes[0].legend()

comps = result.eval_components()
axes[1].plot(x, y, 'o')
axes[1].plot(x, comps['gaussian'], '--', label='Gaussian component')
axes[1].plot(x, comps['line'], '--', label='line component')
axes[1].legend()

plt.show()

The components were generated after the fit using the eval_components() method of the ModelResult, which returns a dictionary of the components, using keys of the model name (or prefix if that is set). This will use the parameter values in result.params and the independent variables (x) used during the fit. Note that while the ModelResult held in result does store the best parameters and the best estimate of the model in result.best_fit, the original model and parameters in pars are left unaltered.
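The idea behind eval_components() can be sketched without lmfit: fit the summed model, then evaluate each component function with its slice of the best-fit parameters. The functions, starting values, and synthetic data below are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, cen, wid):
    return amp * np.exp(-(x - cen)**2 / (2 * wid**2))

def line(x, slope, intercept):
    return slope * x + intercept

def model(x, amp, cen, wid, slope, intercept):
    """Composite: Gaussian peak plus linear background."""
    return gaussian(x, amp, cen, wid) + line(x, slope, intercept)

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 201)
y = model(x, 8.0, 5.0, 1.2, 0.25, -1.0) + rng.normal(0, 0.2, x.size)

popt, _ = curve_fit(model, x, y, p0=[5, 5, 1, 0, 0])

# evaluate each component with its share of the fitted parameters,
# analogous to the dictionary returned by eval_components()
comps = {'gaussian': gaussian(x, *popt[:3]),
         'line': line(x, *popt[3:])}
```

By construction, the components sum back to the full best-fit model, which is what makes the right-hand plot above possible.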

You can apply this composite model to other data sets, or evaluate the model at other values of x. You may want to do this to give a finer or coarser spacing of data points, or to extrapolate the model outside the fitting range. This can be done with:

xwide = linspace(-5, 25, 3001)
predicted = mod.eval(result.params, x=xwide)

In this example, the argument names for the model functions do not overlap. If they had, the prefix argument to Model would have allowed us to identify which parameter went with which component model. As we will see in the next chapter, using composite models with the built-in models provides a simple way to build up complex models.

class CompositeModel(left, right, op[, **kws])

Combine two models (left and right) with binary operator (op).

Normally, one does not have to explicitly create a CompositeModel, but can use normal Python operators +, -, *, and / to combine components, as in:

mod = Model(fcn1) + Model(fcn2) * Model(fcn3)

Parameters:
  • left (Model) – Left-hand model.

  • right (Model) – Right-hand model.

  • op (callable binary operator) – Operator to combine left and right models.

  • **kws (optional) – Additional keywords are passed to Model when creating this new model.

Notes

The two models can use different independent variables.

Note that when using built-in Python binary operators, a CompositeModel will automatically be constructed for you. That is, doing:

mod = Model(fcn1) + Model(fcn2) * Model(fcn3)

will create a CompositeModel. Here, left will be Model(fcn1), op will be operator.add, and right will be another CompositeModel that has a left attribute of Model(fcn2), an op of operator.mul, and a right of Model(fcn3).
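The operator-overloading machinery behind this can be sketched with a toy model class. TinyModel and TinyComposite are illustrative names, not lmfit internals, but the tree they build mirrors the description above:

```python
import operator

class TinyModel:
    """Toy stand-in for lmfit.Model: wraps a function of x."""
    def __init__(self, func):
        self.func = func

    def eval(self, x):
        return self.func(x)

    def __add__(self, other):
        return TinyComposite(self, other, operator.add)

    def __mul__(self, other):
        return TinyComposite(self, other, operator.mul)

class TinyComposite(TinyModel):
    """Toy stand-in for CompositeModel: combines left and right with op."""
    def __init__(self, left, right, op):
        self.left, self.right, self.op = left, right, op

    def eval(self, x):
        return self.op(self.left.eval(x), self.right.eval(x))

fcn1 = TinyModel(lambda x: x + 1)
fcn2 = TinyModel(lambda x: 2 * x)
fcn3 = TinyModel(lambda x: x ** 2)

# Python's operator precedence builds the tree described above:
# add(fcn1, mul(fcn2, fcn3))
mod = fcn1 + fcn2 * fcn3
value = mod.eval(3)   # (3+1) + (2*3)*(3**2) = 58
```

Because TinyComposite is itself a TinyModel, composites can be combined further, which is exactly what makes lmfit's composite models nest cleanly.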

To use a binary operator other than +, -, *, or / you can explicitly create a CompositeModel with the appropriate binary operator. For example, to convolve two models, you could define a simple convolution function, perhaps as:

def convolve(arr, kernel):
    """simple convolution of two arrays"""
    npts = min(arr.size, kernel.size)
    pad = np.ones(npts)
    tmp = np.concatenate((pad*arr[0], arr, pad*arr[-1]))
    out = np.convolve(tmp, kernel, mode='valid')
    noff = int((len(out) - npts) / 2)
    return out[noff:noff+npts]

which extends the data in both directions so that the convolving kernel function gives a valid result over the data range. Because this function takes two array arguments and returns an array, it can be used as the binary operator. For example, with a step-like model function jump and a gaussian lineshape, a convolution model can be created with:

mod = CompositeModel(Model(jump), Model(gaussian), convolve)

A full script using this technique fits such a model to noisy data, prints out the fit report, and shows plots of the data, the components, and the best fit.

Using composite models with built-in or custom operators allows you to build complex models from testable sub-components.
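The requirement on a custom operator (two arrays in, one equal-length array out) can be checked in isolation. This self-contained sketch uses the same style of padded convolution to broaden a sharp step with a normalized Gaussian kernel; the data and kernel width are illustrative:

```python
import numpy as np

def convolve(arr, kernel):
    """Padded convolution, in the style of the operator described above."""
    npts = min(arr.size, kernel.size)
    pad = np.ones(npts)
    tmp = np.concatenate((pad * arr[0], arr, pad * arr[-1]))
    out = np.convolve(tmp, kernel, mode='valid')
    noff = int((len(out) - npts) / 2)
    return out[noff:noff + npts]

x = np.linspace(0, 10, 201)
step = np.where(x < 5, 0.0, 1.0)               # sharp edge at x = 5

kernel = np.exp(-(x - 5)**2 / (2 * 0.5**2))
kernel /= kernel.sum()                          # normalized Gaussian kernel

# two arrays in, one array of the same length out -- exactly the contract
# a CompositeModel binary operator must satisfy
smooth = convolve(step, kernel)
```

The padding with the first and last array values is what keeps the smoothed result valid (and the output the same length as the input) all the way to the edges of the data range.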
