You can simply use something like

myfactr = 1e2
r = scipy.optimize.minimize(..., options={'ftol': myfactr * np.finfo(float).eps})
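For a complete, runnable version of the idea above, here is a hedged sketch; the scale factor 1e2 and the Rosenbrock objective are only illustrative choices, not values prescribed by the snippet:

import numpy as np
from scipy.optimize import minimize, rosen

# factr was the old L-BFGS-B tolerance parameter; the relationship exposed by
# the minimize interface is ftol = factr * machine epsilon.
myfactr = 1e2  # illustrative; use whatever factr you would have passed before
x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res = minimize(rosen, x0, method='L-BFGS-B',
               options={'ftol': myfactr * np.finfo(float).eps})
print(res.x, res.fun)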
If not given, the method is chosen to be one of BFGS, L-BFGS-B, or SLSQP, depending on whether the problem has constraints or bounds.

Bounds for variables (only for L-BFGS-B, TNC and SLSQP): (min, max) pairs for each element in x, defining the bounds on that parameter. Use None for one of min or max when there is no bound in that direction.

Let us consider the problem of minimizing the Rosenbrock function. This function (and its respective derivatives) is implemented as rosen (resp. rosen_der, rosen_hess) in scipy.optimize.

It may be useful to pass a custom minimization method, for example when using a frontend to this method such as scipy.optimize.basinhopping or a different library. You can simply pass a callable as the method parameter, as sketched below.
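A minimal sketch of such a custom method, assuming the usual contract that the callable receives the objective, the starting point and any extra arguments, and returns a scipy.optimize.OptimizeResult; the naive coordinate search below is purely illustrative:

import numpy as np
from scipy.optimize import OptimizeResult, minimize, rosen

def custmin(fun, x0, args=(), maxiter=100, stepsize=0.1, callback=None, **options):
    # Naive coordinate search: try a fixed step in each coordinate direction
    # and keep any move that lowers the objective.
    bestx = np.asarray(x0, dtype=float)
    besty = fun(bestx, *args)
    funcalls, niter, improved = 1, 0, True
    while improved and niter < maxiter:
        improved = False
        niter += 1
        for dim in range(bestx.size):
            for step in (-stepsize, stepsize):
                testx = bestx.copy()
                testx[dim] += step
                testy = fun(testx, *args)
                funcalls += 1
                if testy < besty:
                    besty, bestx, improved = testy, testx, True
        if callback is not None:
            callback(bestx)
    return OptimizeResult(x=bestx, fun=besty, nit=niter,
                          nfev=funcalls, success=(niter > 1))

res = minimize(rosen, [1.35, 0.9, 0.95, 1.1, 1.2], method=custmin,
               options={'stepsize': 0.05})

Returning to the Rosenbrock example introduced above: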
>>> from scipy.optimize import minimize, rosen, rosen_der
>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> res = minimize(rosen, x0, method='Nelder-Mead')
>>> res.x
[1. 1. 1. 1. 1.]
>>> res = minimize(rosen, x0, method='BFGS', jac=rosen_der,
...                options={'gtol': 1e-6, 'disp': True})
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 52
         Function evaluations: 64
         Gradient evaluations: 64
>>> res.x
[1. 1. 1. 1. 1.]
>>> print(res.message)
Optimization terminated successfully.
>>> res.hess
[[0.00749589 0.01255155 0.02396251 0.04750988 0.09495377]
 [0.01255155 0.02510441 0.04794055 0.09502834 0.18996269]
 [0.02396251 0.04794055 0.09631614 0.19092151 0.38165151]
 [0.04750988 0.09502834 0.19092151 0.38341252 0.7664427 ]
 [0.09495377 0.18996269 0.38165151 0.7664427  1.53713523]]
>>> fun = lambda x: (x[0] - 1)**2 + (x[1] - 2.5)**2
>>> cons = ({'type': 'ineq', 'fun': lambda x:  x[0] - 2*x[1] + 2},
...         {'type': 'ineq', 'fun': lambda x: -x[0] - 2*x[1] + 6},
...         {'type': 'ineq', 'fun': lambda x: -x[0] + 2*x[1] + 2})
>>> bnds = ((0, None), (0, None))
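These constraints and bounds can then be handed to minimize. A sketch using SLSQP, with the starting point (2, 0) as an illustrative choice:

from scipy.optimize import minimize

fun = lambda x: (x[0] - 1)**2 + (x[1] - 2.5)**2
cons = ({'type': 'ineq', 'fun': lambda x:  x[0] - 2*x[1] + 2},
        {'type': 'ineq', 'fun': lambda x: -x[0] - 2*x[1] + 6},
        {'type': 'ineq', 'fun': lambda x: -x[0] + 2*x[1] + 2})
bnds = ((0, None), (0, None))

res = minimize(fun, (2, 0), method='SLSQP', bounds=bnds, constraints=cons)
print(res.x)  # should be close to [1.4, 1.7], where the first constraint is active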
Broyden-Fletcher-Goldfarb-Shanno algorithm (method='BFGS').

Another optimization algorithm that needs only function calls to find the minimum is Powell's method, available by setting method='powell' in minimize; a sketch is given below. For the details about the mathematical algorithms behind the implementation, refer to the documentation of least_squares.
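A brief sketch of Powell's method on the Rosenbrock function; the starting point and tolerance are illustrative, not taken from the text above:

import numpy as np
from scipy.optimize import minimize, rosen

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
# Powell's method uses only function evaluations, no gradients.
res = minimize(rosen, x0, method='Powell', options={'xtol': 1e-8, 'disp': True})
print(res.x)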
>>> import numpy as np
>>> from scipy.optimize import minimize
>>> def rosen(x):
...     """The Rosenbrock function"""
...     return sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0)
>>> x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
>>> res = minimize(rosen, x0, method='nelder-mead',
...                options={'xatol': 1e-8, 'disp': True})
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 339
         Function evaluations: 571
>>> print(res.x)
[1. 1. 1. 1. 1.]
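Besides xatol, Nelder-Mead also accepts fatol, the absolute tolerance on the function value. A small, self-contained variation of the call above with illustrative tolerances (scipy.optimize.rosen is the same function as the hand-written rosen):

import numpy as np
from scipy.optimize import minimize, rosen

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res = minimize(rosen, x0, method='nelder-mead',
               options={'xatol': 1e-8, 'fatol': 1e-8, 'disp': True})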
>>> def rosen_with_args(x, a, b):
...     """The Rosenbrock function with additional arguments"""
...     return sum(a*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0) + b
>>> x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
>>> res = minimize(rosen_with_args, x0, method='nelder-mead',
...                args=(0.5, 1.), options={'xatol': 1e-8, 'disp': True})
Optimization terminated successfully.
         Current function value: 1.000000
         Iterations: 319
         Function evaluations: 525
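If you prefer not to use the args mechanism, the extra parameters can be bound with functools.partial instead. A minimal sketch equivalent to the call above, assuming the same a=0.5 and b=1 values:

from functools import partial

import numpy as np
from scipy.optimize import minimize

def rosen_with_args(x, a, b):
    """The Rosenbrock function with additional arguments"""
    return sum(a*(x[1:] - x[:-1]**2.0)**2.0 + (1 - x[:-1])**2.0) + b

obj = partial(rosen_with_args, a=0.5, b=1.0)  # fixes a and b, leaving only x
x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res = minimize(obj, x0, method='nelder-mead', options={'xatol': 1e-8})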
Many optimization methods rely on gradients of the objective function. If the gradient function is not given, the gradients are computed numerically, which induces errors. In such a situation, even if the objective function is not noisy, a gradient-based optimization may be a noisy optimization.

scipy provides scipy.optimize.minimize() to find the minimum of scalar functions of one or more variables. The simple conjugate gradient method can be used by setting the parameter method to CG.

Let's get started by finding the minimum of the scalar function f(x) = -exp(-(x - 0.7)**2) defined below. scipy.optimize.minimize_scalar() uses Brent's method to find the minimum of a function:
>>> from scipy import optimize
>>> def f(x):
...     return -np.exp(-(x - 0.7)**2)
>>> result = optimize.minimize_scalar(f)
>>> result.success  # check if solver was successful
True
>>> x_min = result.x
>>> x_min
0.699999999...
>>> x_min - 0.7
-2.16...e-10
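minimize_scalar can also restrict the search to an interval with method='bounded'. A hedged sketch, where the interval (0, 10) is an arbitrary illustration:

import numpy as np
from scipy import optimize

def f(x):
    return -np.exp(-(x - 0.7)**2)

# Restrict the search for the minimum to the interval [0, 10].
result = optimize.minimize_scalar(f, bounds=(0, 10), method='bounded')
print(result.x)  # should again be close to 0.7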
>>> def f(x):   # The rosenbrock function
...     return .5*(1 - x[0])**2 + (x[1] - x[0]**2)**2
>>> optimize.minimize(f, [2, -1], method="CG")
     fun: 1.6...e-11
     jac: array([-6.15...e-06,  2.53...e-07])
 message: ...'Optimization terminated successfully.'
    nfev: 108
     nit: 13
    njev: 27
  status: 0
 success: True
       x: array([0.99999..., 0.99998...])
>>> def jacobian(x):
...     return np.array((-2*.5*(1 - x[0]) - 4*x[0]*(x[1] - x[0]**2),
...                      2*(x[1] - x[0]**2)))
>>> optimize.minimize(f, [2, 1], method="CG", jac=jacobian)
     fun: 2.957...e-14
     jac: array([ 7.1825...e-07, -2.9903...e-07])
 message: 'Optimization terminated successfully.'
    nfev: 16
     nit: 8
    njev: 16
  status: 0
 success: True
       x: array([1.0000..., 1.0000...])
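The earlier remark about numerically computed gradients can be checked directly with scipy.optimize.approx_fprime, which is roughly what gradient-based methods fall back to when jac is not supplied. A sketch comparing the finite-difference estimate to the analytical jacobian above:

import numpy as np
from scipy.optimize import approx_fprime

def f(x):
    return .5*(1 - x[0])**2 + (x[1] - x[0]**2)**2

def jacobian(x):
    return np.array((-2*.5*(1 - x[0]) - 4*x[0]*(x[1] - x[0]**2),
                     2*(x[1] - x[0]**2)))

x = np.array([2.0, -1.0])
eps = np.sqrt(np.finfo(float).eps)
print(approx_fprime(x, f, eps))  # finite-difference estimate of the gradient
print(jacobian(x))               # analytical gradient; agreement is only approximate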
>>> def f(x):   # The rosenbrock function
...     return .5*(1 - x[0])**2 + (x[1] - x[0]**2)**2
>>> def jacobian(x):
...     return np.array((-2*.5*(1 - x[0]) - 4*x[0]*(x[1] - x[0]**2),
...                      2*(x[1] - x[0]**2)))
>>> optimize.minimize(f, [2, -1], method="Newton-CG", jac=jacobian)
     fun: 1.5...e-15
     jac: array([ 1.0575...e-07, -7.4832...e-08])
 message: ...'Optimization terminated successfully.'
    nfev: 11
    nhev: 0
     nit: 10
    njev: 52
  status: 0
 success: True
       x: array([0.99999..., 0.99999...])
>>> def hessian(x):   # Computed with sympy
...     return np.array(((1 - 4*x[1] + 12*x[0]**2, -4*x[0]),
...                      (-4*x[0], 2)))
>>> optimize.minimize(f, [2, -1], method="Newton-CG", jac=jacobian, hess=hessian)
     fun: 1.6277...e-15
     jac: array([ 1.1104...e-07, -7.7809...e-08])
 message: ...'Optimization terminated successfully.'
    nfev: 11
    nhev: 10
     nit: 10
    njev: 20
  status: 0
 success: True
       x: array([0.99999..., 0.99999...])
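Before trusting a hand-coded jacobian (or Hessian), it can be worth verifying it numerically. scipy.optimize.check_grad compares an analytical gradient against a finite-difference estimate; a sketch reusing the definitions above:

import numpy as np
from scipy.optimize import check_grad

def f(x):
    return .5*(1 - x[0])**2 + (x[1] - x[0]**2)**2

def jacobian(x):
    return np.array((-2*.5*(1 - x[0]) - 4*x[0]*(x[1] - x[0]**2),
                     2*(x[1] - x[0]**2)))

# Returns the 2-norm of the difference between the two gradients; a value on
# the order of sqrt(machine epsilon) or below suggests the jacobian is consistent.
print(check_grad(f, jacobian, np.array([2.0, -1.0])))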
>>> def f(x):   # The rosenbrock function
...     return .5*(1 - x[0])**2 + (x[1] - x[0]**2)**2
>>> def jacobian(x):
...     return np.array((-2*.5*(1 - x[0]) - 4*x[0]*(x[1] - x[0]**2),
...                      2*(x[1] - x[0]**2)))
>>> optimize.minimize(f, [2, -1], method="BFGS", jac=jacobian)
      fun: 2.6306...e-16
 hess_inv: array([[0.99986..., 2.0000...],
                  [2.0000..., 4.498...]])
      jac: array([ 6.7089...e-08, -3.2222...e-08])
  message: ...'Optimization terminated successfully.'
     nfev: 10
      nit: 8
     njev: 10
   status: 0
  success: True
        x: array([1., 0.99999...])
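Whatever the method, the returned OptimizeResult is an ordinary object whose fields (x, fun, nit, success, message, ...) can be consumed programmatically. A small sketch of how the results above might be used in a script:

import numpy as np
from scipy import optimize

def f(x):
    return .5*(1 - x[0])**2 + (x[1] - x[0]**2)**2

def jacobian(x):
    return np.array((-2*.5*(1 - x[0]) - 4*x[0]*(x[1] - x[0]**2),
                     2*(x[1] - x[0]**2)))

res = optimize.minimize(f, [2, -1], method="BFGS", jac=jacobian)
if res.success:
    print("minimum %.2e at x = %s after %d iterations" % (res.fun, res.x, res.nit))
else:
    raise RuntimeError("optimization failed: %s" % res.message)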