Sure, create an instance and pass its bound method:
class MyClass(object):
    ...
    def model_fun(self, x, par):
        ...

obj = MyClass(...)
curve_fit(obj.model_fun, ...)
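For example, a minimal runnable sketch. The exponential model, the offset attribute, and the constructor are invented for illustration, not part of the original question:

import numpy as np
from scipy.optimize import curve_fit

class MyClass(object):
    def __init__(self, offset):
        self.offset = offset  # instance state available to the model

    def model_fun(self, x, a, b):
        # curve_fit sees the signature (x, a, b); self is already bound
        return a * np.exp(-b * x) + self.offset

xdata = np.linspace(0, 4, 50)
obj = MyClass(offset=0.5)
ydata = obj.model_fun(xdata, 2.5, 1.3) + 0.1 * np.random.normal(size=xdata.size)

popt, pcov = curve_fit(obj.model_fun, xdata, ydata)  # bound method passed directly

Because Python strips self from the signature of a bound method, curve_fit's introspection sees only (x, a, b) and treats a and b as the fit parameters.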
From the curve_fit documentation:

- f: The model function, f(x, …). It must take the independent variable as the first argument and the parameters to fit as separate remaining arguments. curve_fit assumes ydata = f(xdata, *params) + eps.
- sigma: Determines the uncertainty in ydata. If we define residuals as r = ydata - f(xdata, *popt), then the interpretation of sigma depends on its number of dimensions.
- pcov: The estimated covariance of popt. The diagonals provide the variance of the parameter estimates. To compute one-standard-deviation errors on the parameters, use perr = np.sqrt(np.diag(pcov)).
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from scipy.optimize import curve_fit

Define the model function and generate some noisy data to fit:

>>> def func(x, a, b, c):
...     return a * np.exp(-b * x) + c
>>> xdata = np.linspace(0, 4, 50)
>>> y = func(xdata, 2.5, 1.3, 0.5)
>>> rng = np.random.default_rng()
>>> y_noise = 0.2 * rng.normal(size=xdata.size)
>>> ydata = y + y_noise
>>> plt.plot(xdata, ydata, 'b-', label='data')

Fit for the parameters a, b, c of the function func:

>>> popt, pcov = curve_fit(func, xdata, ydata)
>>> popt
array([2.56274217, 1.37268521, 0.47427475])
>>> plt.plot(xdata, func(xdata, *popt), 'r-',
...          label='fit: a=%5.3f, b=%5.3f, c=%5.3f' % tuple(popt))

Constrain the optimization with bounds, here 0 <= a <= 3, 0 <= b <= 1 and 0 <= c <= 0.5:

>>> popt, pcov = curve_fit(func, xdata, ydata, bounds=(0, [3., 1., 0.5]))
>>> popt
array([2.43736712, 1.        , 0.34463856])
>>> plt.plot(xdata, func(xdata, *popt), 'g--',
...          label='fit: a=%5.3f, b=%5.3f, c=%5.3f' % tuple(popt))

>>> plt.xlabel('x')
>>> plt.ylabel('y')
>>> plt.legend()
>>> plt.show()
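The sigma and pcov behavior described earlier can be exercised on the same session. A minimal sketch, assuming a constant 0.2 standard deviation on each point (the value used to generate the noise above):

>>> sigma = np.full_like(ydata, 0.2)    # 1-D sigma: one std-dev per data point
>>> popt, pcov = curve_fit(func, xdata, ydata, sigma=sigma, absolute_sigma=True)
>>> perr = np.sqrt(np.diag(pcov))       # one-standard-deviation errors on a, b, c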
The Model class in lmfit provides a simple and flexible approach to curve-fitting problems. Like scipy.optimize.curve_fit, a Model uses a model function – a function that is meant to calculate a model for some phenomenon – and then uses that to best match an array of supplied data. Beyond that similarity, its interface is rather different from scipy.optimize.curve_fit, for example in that it uses Parameters, but also offers several other important advantages. In short, the Model class provides a general way to wrap a pre-defined function as a fitting model.

It is sometimes desirable to save a Model for later use outside of the code used to define the model. Lmfit provides a save_model() function that will save a Model to a file, and a companion load_model() function that can read this file and reconstruct a Model from it (a sketch of this round trip appears after the example below).

For comparison, we start with scipy.optimize.curve_fit alone: we create data, make an initial guess of the model values, and run scipy.optimize.curve_fit with the model function, data arrays, and initial guesses. The results returned are the optimal values for the parameters and the covariance matrix. It's simple and useful, but it misses the benefits of lmfit.
from numpy import exp, linspace, random
from scipy.optimize import curve_fit

def gaussian(x, amp, cen, wid):
    return amp * exp(-(x - cen) ** 2 / wid)

x = linspace(-10, 10, 101)
y = gaussian(x, 2.33, 0.21, 1.51) + random.normal(0, 0.2, x.size)

init_vals = [1, 0, 1]  # for [amp, cen, wid]
best_vals, covar = curve_fit(gaussian, x, y, p0=init_vals)
print('best_vals: {}'.format(best_vals))

best_vals: [2.20240738 0.18069316 1.66433569]
With lmfit, we instead create a Model that wraps the gaussian model function:

from lmfit import Model

gmodel = Model(gaussian)
print(f'parameter names: {gmodel.param_names}')
print(f'independent variables: {gmodel.independent_vars}')

parameter names: ['amp', 'cen', 'wid']
independent variables: ['x']

params = gmodel.make_params()
params = gmodel.make_params(cen=0.3, amp=3, wid=1.25)
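As a sketch of the save/load round trip mentioned above (the filename is arbitrary; this assumes the current lmfit API, in which a user-defined model function must be supplied to load_model via funcdefs unless the dill package is installed):

from lmfit.model import save_model, load_model

save_model(gmodel, 'gaussian_model.sav')  # write the Model to a file
# Reconstruct it later, outside the defining code; pass the function
# definition so the model function can be restored:
gmodel2 = load_model('gaussian_model.sav', funcdefs={'gaussian': gaussian})
print(gmodel2.param_names)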
Here we discuss lmfit's Model class. This takes a model function – a function that calculates a model for some data – and provides methods to create parameters for that model and to fit data using that model function. This is closer in spirit to scipy.optimize.curve_fit(), but with the advantages of using Parameters and lmfit.

The first argument of the model function is assumed to be the independent variable, and the remaining function arguments are used to create parameters for the model. The Model class thus allows us to easily wrap a model function such as the gaussian function: it automatically generates the appropriate residual function and determines the corresponding parameter names from the function signature itself. After a fit, the result's best_fit attribute holds the ndarray result of the model function, evaluated at the provided independent variables and with the best-fit parameters:
>>> from numpy import sqrt, pi, exp, linspace
>>> from scipy.optimize import curve_fit
>>>
>>> def gaussian(x, amp, cen, wid):
...     return amp * exp(-(x - cen) ** 2 / wid)
...
>>> x, y = read_data_from_somewhere(....)
>>> init_vals = [5, 5, 1]  # for [amp, cen, wid]
>>> best_vals, covar = curve_fit(gaussian, x, y, p0=init_vals)
>>> print(best_vals)

>>> from lmfit import Model
>>> gmod = Model(gaussian)
>>> gmod.param_names
['amp', 'cen', 'wid']
>>> gmod.independent_vars
['x']

>>> params = gmod.make_params()
>>> params = gmod.make_params(cen=5, amp=200, wid=1)

>>> x = linspace(0, 10, 201)
>>> y = gmod.eval(x=x, amp=10, cen=6.2, wid=0.75)
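To actually run a fit with the wrapped model and inspect the result, a minimal sketch continuing the session above (here y is the noise-free synthetic curve just generated, so the fit is merely illustrative):

>>> result = gmod.fit(y, params, x=x)
>>> print(result.fit_report())
>>> result.best_fit  # ndarray of the model evaluated at x with the best-fit parameters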
Scipy has a powerful module for curve fitting which uses the non-linear least squares method to fit a function to our data. The documentation is available at https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html.

In this post we cover:

Introduction
1. Generate data for a linear fitting
2. Using Scipy's curve_fit module
   What if we want the line to pass through the origin (x=0, y=0)? In other words, what if we want the intercept to be zero?
   What if we want some statistics parameters ($R^2$)?
3. Using the lmfit module
   Sometimes you need to look beyond just least squares ;)
4. Power law curve fitting
5. Transcendental function fitting

I frequently use a power law to study the variation of stiffness with stress and to create constitutive laws for materials. Let's see how to do a power fitting with scipy's curve_fit and lmfit.
We fit the data to a straight line $y = mx + c$, where m is the slope of the line, c is the intercept at x = 0, and x (Time) and y (Stress) are our data. The y-axis data is usually the value measured during the experiment/simulation, and we are trying to find how the y-axis quantity depends on the x-axis quantity.
# Importing numpy for creating data and matplotlib for plotting
import numpy as np
import matplotlib.pyplot as plt

# Creating x axis data
x = np.linspace(0, 10, 100)

# Creating random data for the y axis
# Here the slope (m) is predefined to be 2.39645
# The intercept is 0
# The random.normal method just adds some noise to the data
y = 2.39645 * x + np.random.normal(0, 2, 100)

# Plotting the data
plt.scatter(x, y, c='black')
plt.xlabel('Time (sec)')
plt.ylabel('Stress (kPa)')
plt.show()
To use curve_fit for the above data, we need to define a linear function which will be used for the fitting. The output will be the slope (m) and intercept (c) for our data, along with the covariance of these values; fitting statistics such as $R^2$ or $\chi^2$ can then be computed from the residuals.
# Calling scipy's curve_fit function from the optimize module
from scipy.optimize import curve_fit

# Defining a fitting function
def linear_fit(x, m, c):
    return m * x + c

'''
1. Using the curve_fit function to fit the random linear data.
2. params returns an array with the best-fit values of the different fitting
   parameters. In our case the first entry in params is the slope m and the
   second entry is the intercept c.
3. covariance returns a matrix of covariance for our fitted parameters.
4. The first argument f is the defined fitting function.
5. xdata and ydata are the x and y data we generated above.
'''
params, covariance = curve_fit(f=linear_fit, xdata=x, ydata=y)
print('Slope (m) is ', params[0])
print('Intercept (c) is ', params[1])
print(covariance)
Slope (m) is  2.430364409972301
Intercept (c) is  -0.11571245549643683
[[ 0.00434152 -0.02170757]
 [-0.02170757  0.14544807]]
Although the covariance matrix provides information about the variance of the fitted parameters, most people use the standard deviation and the coefficient of determination ($R^2$) to get an estimate of the 'goodness' of the fit. I use them all the time for simpler fittings (linear, exponential, power, etc.).
# Getting the standard deviation of the fitted parameters
standarddevparams = np.sqrt(np.diag(covariance))

# Getting the R^2 value for the fitting
# Read more at https://en.wikipedia.org/wiki/Coefficient_of_determination

# Step 1: Get the residuals for the fitting
residuals = y - linear_fit(x, params[0], params[1])
# Step 2: Get the sum of squares of the residuals
squaresumofresiduals = np.sum(residuals ** 2)
# Step 3: Get the total sum of squares using the mean of the y values
squaresum = np.sum((y - np.mean(y)) ** 2)
# Step 4: Get the R^2 value
R2 = 1 - (squaresumofresiduals / squaresum)

print('The slope (m) is ', params[0], '+-', standarddevparams[0])
print('The intercept (c) is ', params[1], '+-', standarddevparams[1])
print('The R^2 value is ', R2)
Let’s try our linear fitting using lmfit!
'''
1. Importing the module. If you don't have lmfit, you can install it with pip
   or conda. Instructions at https://lmfit.github.io/lmfit-py/installation.html
2. Parameters is the main parameters class.
3. minimize is the main fitting function. It takes our parameters and spits
   out the best-fit parameters.
4. fit_report provides the parameter fit values and different statistical values.
'''
from lmfit import Parameters, minimize, fit_report

# Define the fitting function; it returns the residual to be minimized
def linear_fitting_lmfit(params, x, y):
    m = params['m']
    c = params['c']
    y_fit = m * x + c
    return y_fit - y

# Defining the various parameters
params = Parameters()
# Slope is bounded between a min value of 1.0 and a max value of 3.0
params.add('m', min=1.0, max=3.0)
# Intercept is fixed at a value of 0.0
params.add('c', value=0.0, vary=False)

# Calling the minimize function. args contains the x and y data.
fitted_params = minimize(linear_fitting_lmfit, params, args=(x, y), method='least_squares')

# Getting the fitted values
m = fitted_params.params['m'].value
c = fitted_params.params['c'].value

# Printing the fitted values
print('The slope (m) is ', m)
print('The intercept (c) is ', c)

# Pretty printing all the statistical data
print(fit_report(fitted_params))
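The power-law fitting promised earlier follows exactly the same pattern as the linear case. A minimal sketch with curve_fit (the prefactor 3.1, exponent 0.4, and noise level are made up for illustration):

# Power law: y = a * x^b
def power_law(x, a, b):
    return a * np.power(x, b)

# Synthetic power-law data with noise (made-up constants)
x_pl = np.linspace(0.1, 10, 100)
y_pl = 3.1 * np.power(x_pl, 0.4) + np.random.normal(0, 0.2, 100)

params_pl, covariance_pl = curve_fit(f=power_law, xdata=x_pl, ydata=y_pl)
print('Prefactor (a) is ', params_pl[0])
print('Exponent (b) is ', params_pl[1])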
import os
import sys
import glob
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
%precision 4
plt.style.use('ggplot')
import scipy.linalg as la
def f(x):
    return x ** 3 - 3 * x + 1

x = np.linspace(-3, 3, 100)
plt.axhline(0)
plt.plot(x, f(x));
from scipy.optimize import brentq, newton

brentq(f, -3, 0), brentq(f, 0, 1), brentq(f, 1, 3)
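newton is imported above but not used; as a quick sketch, the same three roots can also be found with Newton's (secant) iteration, using starting guesses chosen near each root:

newton(f, -2), newton(f, 0.5), newton(f, 1.5)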