qml.RMSPropOptimizer
- class RMSPropOptimizer(stepsize=0.01, decay=0.9, eps=1e-08)
Bases: pennylane.optimize.adagrad.AdagradOptimizer
Root mean squared propagation optimizer.
The root mean square propagation optimizer is a modified Adagrad optimizer, with a decay applied to the learning-rate adaptation. Extensions of the Adagrad optimization method generally start the sum \(a\) over past gradients in the denominator of the learning rate at a finite \(t'\) with \(0 < t' < t\), or decay past gradients to avoid an ever-decreasing learning rate.
Root Mean Square propagation is such an adaptation, where
\[a_i^{(t+1)} = \gamma a_i^{(t)} + (1-\gamma) (\partial_{x_i} f(x^{(t)}))^2.\]
- Parameters
stepsize (float) – the user-defined hyperparameter \(\eta\) used in the Adagrad optimization
decay (float) – the learning rate decay \(\gamma\)
eps (float) – offset \(\epsilon\) added for numerical stability (see Adagrad)
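To make the update rule above concrete, here is a minimal NumPy sketch of a single RMSProp step. The function name rmsprop_step and the placement of \(\epsilon\) inside the square root (following the Adagrad convention referenced above) are illustrative assumptions, not the library's internal implementation.

```python
import numpy as np

def rmsprop_step(x, grad, a, stepsize=0.01, decay=0.9, eps=1e-8):
    """Illustrative RMSProp update; `a` is the running average
    of squared gradients from previous steps."""
    # a_i^{(t+1)} = gamma * a_i^{(t)} + (1 - gamma) * (df/dx_i)^2
    a = decay * a + (1 - decay) * grad**2
    # x_i^{(t+1)} = x_i^{(t)} - eta / sqrt(a_i^{(t+1)} + eps) * df/dx_i
    x = x - stepsize / np.sqrt(a + eps) * grad
    return x, a

x = np.array([0.5, -0.3])
a = np.zeros_like(x)   # accumulator starts at zero
grad = 2 * x           # gradient of f(x) = x_1^2 + x_2^2
x, a = rmsprop_step(x, grad, a)
```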
Methods
- apply_grad(grad, args): Update the variables args to take a single optimization step.
- compute_grad(objective_fn, args, kwargs[, ...]): Compute the gradient of the objective function at the given point and return it along with the objective function forward pass (if available).
- reset(): Reset optimizer by erasing memory of past steps.
- step(objective_fn, *args[, grad_fn]): Update trainable arguments with one step of the optimizer.
- step_and_cost(objective_fn, *args[, grad_fn]): Update trainable arguments with one step of the optimizer and return the corresponding objective function value prior to the step.
- apply_grad(grad, args)
Update the variables args to take a single optimization step. Flattens and unflattens the inputs to maintain nested iterables as the parameters of the optimization.
- Parameters
grad (tuple [array]) – the gradient of the objective function at point \(x^{(t)}\): \(\nabla f(x^{(t)})\).
args (tuple) – the current value of the variables \(x^{(t)}\).
- Returns
the new values \(x^{(t+1)}\)
- Return type
list [array]
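As a rough usage sketch, apply_grad can be called directly with a hand-constructed gradient tuple (one entry per argument); in typical workflows step computes the gradient for you, so the values below are illustrative only:

```python
import pennylane as qml
from pennylane import numpy as np

opt = qml.RMSPropOptimizer(stepsize=0.1)
params = np.array([0.5, -0.3], requires_grad=True)

# Hypothetical gradient of the objective at `params`, one array per argument
grad = (np.array([1.0, -0.6]),)

(new_params,) = opt.apply_grad(grad, (params,))
```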
- static compute_grad(objective_fn, args, kwargs, grad_fn=None)
Compute the gradient of the objective function at the given point and return it along with the objective function forward pass (if available).
- Parameters
objective_fn (function) – the objective function for optimization
args (tuple) – tuple of NumPy arrays containing the current parameters for the objective function
kwargs (dict) – keyword arguments for the objective function
grad_fn (function) – optional gradient function of the objective function with respect to the variables args. If None, the gradient function is computed automatically. Must return the same shape of tuple [array] as the autograd derivative.
- Returns
NumPy array containing the gradient \(\nabla f(x^{(t)})\) and the objective function output. If grad_fn is provided, the objective function will not be evaluated and instead None will be returned.
- Return type
tuple (array)
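A minimal sketch of calling compute_grad directly; note that args is passed as a tuple and kwargs as a dict. The circuit below is an illustrative assumption:

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def cost(x):
    qml.RX(x[0], wires=0)
    return qml.expval(qml.PauliZ(0))

params = np.array([0.2], requires_grad=True)
grad, forward = qml.RMSPropOptimizer.compute_grad(cost, (params,), {})
# grad: gradient with respect to the trainable arguments
# forward: the objective value at params, or None if unavailable
```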
- reset()
Reset optimizer by erasing memory of past steps.
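For instance, when one optimizer instance is reused across unrelated optimizations, resetting avoids carrying the squared-gradient accumulator from the old problem into the new one (a usage sketch):

```python
import pennylane as qml

opt = qml.RMSPropOptimizer(stepsize=0.01)
# ... run opt.step(...) on a first problem ...
opt.reset()  # erase the accumulated squared-gradient history
# subsequent steps adapt the learning rate from scratch
```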
- step(objective_fn, *args, grad_fn=None, **kwargs)
Update trainable arguments with one step of the optimizer.
- Parameters
objective_fn (function) – the objective function for optimization
*args – variable length argument list for the objective function
grad_fn (function) – optional gradient function of the objective function with respect to the variables x. If None, the gradient function is computed automatically. Must return a tuple[array] with the same number of elements as *args. Each array of the tuple should have the same shape as the corresponding argument.
**kwargs – variable length of keyword arguments for the objective function
- Returns
the new variable values \(x^{(t+1)}\). If a single argument is provided, list [array] is replaced by array.
- Return type
list [array]
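A minimal end-to-end sketch of iterating step; the device, circuit, and iteration count are illustrative choices:

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def cost(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    return qml.expval(qml.PauliZ(0))

opt = qml.RMSPropOptimizer(stepsize=0.01, decay=0.9)
params = np.array([0.1, 0.2], requires_grad=True)

for _ in range(100):
    params = opt.step(cost, params)  # single argument in, single array out
```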
- step_and_cost(objective_fn, *args, grad_fn=None, **kwargs)
Update trainable arguments with one step of the optimizer and return the corresponding objective function value prior to the step.
- Parameters
objective_fn (function) – the objective function for optimization
*args – variable length argument list for objective function
grad_fn (function) – optional gradient function of the objective function with respect to the variables *args. If None, the gradient function is computed automatically. Must return a tuple[array] with the same number of elements as *args. Each array of the tuple should have the same shape as the corresponding argument.
**kwargs – variable length of keyword arguments for the objective function
- Returns
the new variable values \(x^{(t+1)}\) and the objective function output prior to the step. If a single argument is provided, list [array] is replaced by array.
- Return type
tuple[list [array], float]
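A sketch of tracking convergence with step_and_cost, reusing the cost QNode from the step example above; the returned cost is evaluated at the parameters before the update:

```python
opt = qml.RMSPropOptimizer(stepsize=0.01)
params = np.array([0.1, 0.2], requires_grad=True)

for i in range(100):
    params, prev_cost = opt.step_and_cost(cost, params)
    if i % 20 == 0:
        print(f"step {i}: cost before update = {float(prev_cost):.6f}")
```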