qml.QNGOptimizerQJIT¶
- class QNGOptimizerQJIT(stepsize=0.01, approx='block-diag', lam=0)[source]¶
Bases: object

Optax-like and jax.jit/qml.qjit-compatible implementation of the QNGOptimizer, a step- and parameter-dependent learning rate optimizer, leveraging a reparameterization of the optimization space based on the Fubini-Study metric tensor.

For more theoretical details, see the QNGOptimizer documentation.

Note
Please be aware of the following:

- As with QNGOptimizer, QNGOptimizerQJIT supports a single QNode to encode the objective function.
- QNGOptimizerQJIT does not support a QNode with multiple arguments. A potential workaround is to combine all parameters into a single objective function argument (see the sketch after this list).
- QNGOptimizerQJIT does not work correctly if there is any classical processing in the QNode circuit (e.g., 2 * theta as a gate parameter).
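The single-argument workaround can look like the following. This is only an illustrative sketch; the parameter layout (two RX angles followed by two RY angles packed into one array) and the circuit itself are hypothetical:

import pennylane as qml
import jax.numpy as jnp

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(params):
    # Instead of circuit(rx_angles, ry_angles), both groups are packed into a
    # single array: params[0:2] are RX angles, params[2:4] are RY angles.
    qml.RX(params[0], wires=0)
    qml.RX(params[1], wires=1)
    qml.RY(params[2], wires=0)
    qml.RY(params[3], wires=1)
    return qml.expval(qml.Z(0) @ qml.Z(1))

opt = qml.QNGOptimizerQJIT(stepsize=0.2)
params = jnp.array([0.1, 0.2, 0.3, 0.4])
state = opt.init(params)
params, state = opt.step(circuit, params, state)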
Example:

Consider a hybrid workflow to optimize an objective function defined by a quantum circuit. To make the optimization faster, the entire workflow can be just-in-time compiled using the qml.qjit decorator:

import pennylane as qml
import jax.numpy as jnp

@qml.qjit(autograph=True)
def workflow():
    dev = qml.device("lightning.qubit", wires=2)

    @qml.qnode(dev)
    def circuit(params):
        qml.RX(params[0], wires=0)
        qml.RY(params[1], wires=1)
        return qml.expval(qml.Z(0) + qml.X(1))

    opt = qml.QNGOptimizerQJIT(stepsize=0.2)

    params = jnp.array([0.1, 0.2])
    state = opt.init(params)
    for _ in range(100):
        params, state = opt.step(circuit, params, state)

    return params

>>> workflow()
Array([ 3.14159265, -1.57079633], dtype=float64)
Make sure you are using the lightning.qubit device along with qml.qjit with autograph enabled.

Using the jax.jit decorator for the entire workflow is not recommended, since it may lead to a significant compilation time with no runtime benefit. However, jax.jit can be used with the default.qubit device to just-in-time compile the step (or step_and_cost) method of the optimizer. For example:

import pennylane as qml
import jax.numpy as jnp
import jax
from functools import partial

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=1)
    return qml.expval(qml.Z(0) + qml.X(1))

opt = qml.QNGOptimizerQJIT(stepsize=0.2)
step = jax.jit(partial(opt.step, circuit))

params = jnp.array([0.1, 0.2])
state = opt.init(params)
for _ in range(100):
    params, state = step(params, state)

>>> params
Array([ 3.14159265, -1.57079633], dtype=float64)
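The same pattern works for step_and_cost, which additionally returns the objective value evaluated before each update. The following is a minimal sketch along the same lines as the example above; tracking the costs in a Python list is illustrative:

import pennylane as qml
import jax.numpy as jnp
import jax
from functools import partial

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=1)
    return qml.expval(qml.Z(0) + qml.X(1))

opt = qml.QNGOptimizerQJIT(stepsize=0.2)

# Jit the combined update-and-evaluate method instead of ``step``.
step_and_cost = jax.jit(partial(opt.step_and_cost, circuit))

params = jnp.array([0.1, 0.2])
state = opt.init(params)

costs = []
for _ in range(100):
    params, state, cost = step_and_cost(params, state)
    costs.append(cost)  # objective value before each update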
- Keyword Arguments:

stepsize=0.01 (float) – the user-defined stepsize hyperparameter

approx="block-diag" (str) – approximation method for the metric tensor:

- If None, the full metric tensor is computed.
- If "block-diag", the block-diagonal approximation is computed, reducing the number of evaluated circuits significantly.
- If "diag", the diagonal approximation is computed, slightly reducing the classical overhead but not the quantum resources (compared to "block-diag").

lam=0 (float) – metric tensor regularization to be applied at each optimization step (see the construction sketch after this list)
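As an illustration of these options, the optimizer can be constructed with each approximation method. This is a minimal sketch; the lam value is an arbitrary example:

import pennylane as qml

# Full metric tensor (no approximation)
opt_full = qml.QNGOptimizerQJIT(stepsize=0.01, approx=None)

# Block-diagonal approximation (the default)
opt_block = qml.QNGOptimizerQJIT(stepsize=0.01, approx="block-diag")

# Diagonal approximation, with a small regularization added to the metric tensor
opt_diag = qml.QNGOptimizerQJIT(stepsize=0.01, approx="diag", lam=1e-6)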
Methods

init(params) – Return the initial state of the optimizer.

step(qnode, params, state, **kwargs) – Update the QNode parameters and the optimizer’s state for a single optimization step.

step_and_cost(qnode, params, state, **kwargs) – Update the QNode parameters and the optimizer’s state for a single optimization step and return the corresponding objective function value prior to the step.
- init(params)[source]¶
Return the initial state of the optimizer.
- Parameters:
params (array) – QNode parameters
- Returns:
None
Note

Since the Quantum Natural Gradient (QNG) algorithm doesn’t actually require any particular state, this method always returns None as the state. However, it is provided to match the optax-like interface for all JAX-based quantum-specific optimizers. A minimal usage sketch follows.
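The sketch below mirrors the optax-style opt.init(params) pattern; the parameter values are illustrative:

import pennylane as qml
import jax.numpy as jnp

opt = qml.QNGOptimizerQJIT(stepsize=0.2)
params = jnp.array([0.1, 0.2])

state = opt.init(params)  # returns None; QNG keeps no internal optimizer state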
- step(qnode, params, state, **kwargs)[source]¶
Update the QNode parameters and the optimizer’s state for a single optimization step.
- Parameters:
qnode (QNode) – QNode objective function to be optimized
params (array) – QNode parameters to be updated
state – current state of the optimizer
**kwargs – variable-length keyword arguments for the QNode
- Returns:
(new parameter values, new optimizer state)
- Return type:
tuple
Note

Since the Quantum Natural Gradient (QNG) algorithm doesn’t actually require any particular state, the state object is never actually updated here. However, it is carried through the optimization to match the optax-like interface for all JAX-based quantum-specific optimizers. A minimal single-step sketch follows.
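The sketch below reuses the two-qubit circuit from the examples above and is illustrative only; note that the returned state is simply passed through:

import pennylane as qml
import jax.numpy as jnp

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=1)
    return qml.expval(qml.Z(0) + qml.X(1))

opt = qml.QNGOptimizerQJIT(stepsize=0.2)
params = jnp.array([0.1, 0.2])
state = opt.init(params)

# One natural-gradient update; ``state`` comes back unchanged (None).
params, state = opt.step(circuit, params, state)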
- step_and_cost(qnode, params, state, **kwargs)[source]¶
Update the QNode parameters and the optimizer’s state for a single optimization step and return the corresponding objective function value prior to the step.
- Parameters:
qnode (QNode) – QNode objective function to be optimized
params (array) – QNode parameters to be updated
state – current state of the optimizer
**kwargs – variable-length keyword arguments for the QNode
- Returns:
(new parameter values, new optimizer state, objective function value)
- Return type:
tuple
Note

Since the Quantum Natural Gradient (QNG) algorithm doesn’t actually require any particular state, the state object is never actually updated here. However, it is carried through the optimization to match the optax-like interface for all JAX-based quantum-specific optimizers. A minimal sketch of the returned tuple follows.
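The sketch below reuses the two-qubit circuit from the examples above and is illustrative only; the returned cost is the objective evaluated at the parameters before the update:

import pennylane as qml
import jax.numpy as jnp

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=1)
    return qml.expval(qml.Z(0) + qml.X(1))

opt = qml.QNGOptimizerQJIT(stepsize=0.2)
params = jnp.array([0.1, 0.2])
state = opt.init(params)

new_params, state, cost = opt.step_and_cost(circuit, params, state)
# ``cost`` equals circuit(params), i.e. the objective before the update.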