qml.MomentumQNGOptimizer

class MomentumQNGOptimizer(stepsize=0.01, momentum=0.9, approx='block-diag', lam=0)[source]

Bases: pennylane.optimize.qng.QNGOptimizer

A generalization of the Quantum Natural Gradient (QNG) optimizer, obtained by considering a discrete-time Langevin equation with a QNG force. For details of the theory and derivation of Momentum-QNG, see:

Oleksandr Borysenko, Mykhailo Bratchenko, Ilya Lukin, Mykola Luhanko, Ihor Omelchenko, Andrii Sotnikov and Alessandro Lomi. “Application of Langevin Dynamics to Advance the Quantum Natural Gradient Optimization Algorithm” arXiv:2409.01978

We are grateful to David Wierichs for his generous help with the multi-argument variant of the MomentumQNGOptimizer class.

MomentumQNGOptimizer is a subclass of QNGOptimizer that requires one additional hyperparameter, the momentum coefficient $0 \leq \rho < 1$, with default value $\rho = 0.9$. For $\rho = 0$, Momentum-QNG reduces to the basic QNG. The parameter update rule in Momentum-QNG reads:

$$x^{(t+1)} = x^{(t)} + \rho \left( x^{(t)} - x^{(t-1)} \right) - \eta\, g(f(x^{(t)}))^{-1} \nabla f(x^{(t)}),$$

where $\eta$ is the stepsize (learning rate), $g(f(x^{(t)}))^{-1}$ is the pseudo-inverse of the Fubini-Study metric tensor, and $f(x^{(t)}) = \langle 0 | U^{\dagger}(x^{(t)})\, \hat{B}\, U(x^{(t)}) | 0 \rangle$ is the expectation value of some observable $\hat{B}$ measured on the variational quantum circuit $U(x^{(t)})$.
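To make the recursion concrete, here is a minimal NumPy sketch of a single update. It is illustrative only: the names momentum_qng_update, metric_tensor, and grad are hypothetical stand-ins for quantities the optimizer computes internally.

import numpy as np

def momentum_qng_update(x, x_prev, grad, metric_tensor, eta=0.01, rho=0.9):
    # One Momentum-QNG step:
    #   x_new = x + rho * (x - x_prev) - eta * pinv(g) @ grad
    g_pinv = np.linalg.pinv(metric_tensor)  # pseudo-inverse of the Fubini-Study metric tensor
    return x + rho * (x - x_prev) - eta * g_pinv @ grad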

Examples:

Consider an objective function realized as a QNode that returns the expectation value of a Hamiltonian.

>>> dev = qml.device("default.qubit", wires=(0, 1, "aux"))
>>> @qml.qnode(dev)
... def circuit(params):
...     qml.RX(params[0], wires=0)
...     qml.RY(params[1], wires=0)
...     return qml.expval(qml.X(0))

Once constructed, the cost function can be passed directly to the optimizer’s step() function. In addition to the standard learning rate, the MomentumQNGOptimizer takes a momentum parameter:

>>> eta = 0.01
>>> rho = 0.93
>>> init_params = qml.numpy.array([0.5, 0.23], requires_grad=True)
>>> opt = qml.MomentumQNGOptimizer(stepsize=eta, momentum=rho)
>>> theta_new = opt.step(circuit, init_params)
>>> theta_new
tensor([0.50437193, 0.18562052], requires_grad=True)

An alternative function to calculate the metric tensor of the QNode can be provided to step() via the metric_tensor_fn keyword argument; see QNGOptimizer for details.
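As a hedged sketch, assuming the qml.metric_tensor transform with its approx keyword applied to the circuit defined above:

>>> mt_fn = qml.metric_tensor(circuit, approx="block-diag")
>>> theta_new = opt.step(circuit, init_params, metric_tensor_fn=mt_fn)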

See also

For details on quantum natural gradient, see QNGOptimizer. See MomentumOptimizer for a first-order optimizer with momentum. Also see the examples from the reference above, which benchmark the Momentum-QNG optimizer against the basic QNG, Momentum, and Adam optimizers.

Keyword Arguments
  • stepsize=0.01 (float) – the user-defined hyperparameter η

  • momentum=0.9 (float) – the user-defined hyperparameter ρ

  • approx (str) –

    Which approximation of the metric tensor to compute.

    • If None, the full metric tensor is computed

    • If "block-diag", the block-diagonal approximation is computed, reducing the number of evaluated circuits significantly.

    • If "diag", only the diagonal approximation is computed, slightly reducing the classical overhead but not the quantum resources (compared to "block-diag").

  • lam=0 (float) – metric tensor regularization $G_{ij} + \lambda I$ to be applied at each optimization step
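All four hyperparameters are fixed at construction time. For example, a sketch requesting the full metric tensor together with a small regularizer (the values shown are illustrative, not recommendations):

>>> opt = qml.MomentumQNGOptimizer(stepsize=0.01, momentum=0.9, approx=None, lam=1e-8)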

apply_grad(grad, args)

Update the parameter array x for a single optimization step.

compute_grad(objective_fn, args, kwargs[, ...])

Compute the gradient of the objective function at the given point and return it along with the objective function forward pass (if available).

step(qnode, *args[, grad_fn, ...])

Update the parameter array x with one step of the optimizer.

step_and_cost(qnode, *args[, grad_fn, ...])

Update the parameter array x with one step of the optimizer and return the corresponding objective function value prior to the step.

apply_grad(grad, args)[source]

Update the parameter array x for a single optimization step. Flattens and unflattens the inputs to maintain nested iterables as the parameters of the optimization.

Parameters
  • grad (array) – the gradient of the objective function at point $x^{(t)}$: $\nabla f(x^{(t)})$

  • args (array) – the current value of the variables $x^{(t)}$

Returns

the new values $x^{(t+1)}$

Return type

array

static compute_grad(objective_fn, args, kwargs, grad_fn=None)

Compute the gradient of the objective function at the given point and return it along with the objective function forward pass (if available).

Parameters
  • objective_fn (function) – the objective function for optimization

  • args (tuple) – tuple of NumPy arrays containing the current parameters for the objective function

  • kwargs (dict) – keyword arguments for the objective function

  • grad_fn (function) – optional gradient function of the objective function with respect to the variables args. If None, the gradient function is computed automatically. Must return a tuple[array] with the same shape as the autograd derivative.

Returns

NumPy array containing the gradient $\nabla f(x^{(t)})$ and the objective function output. If grad_fn is provided, the objective function will not be evaluated and instead None will be returned.

Return type

tuple (array)
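As a hedged usage sketch with the circuit and optimizer from the example above (the method is static and returns the gradient together with the forward-pass value, which may be None):

>>> grad, forward = opt.compute_grad(circuit, (init_params,), {})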

step(qnode, *args, grad_fn=None, recompute_tensor=True, metric_tensor_fn=None, **kwargs)

Update the parameter array x with one step of the optimizer.

Parameters
  • qnode (QNode) – the QNode for optimization

  • *args – variable length argument list for qnode

  • grad_fn (function) – optional gradient function of the qnode with respect to the variables *args. If None, the gradient function is computed automatically. Must return a tuple[array] with the same number of elements as *args. Each array of the tuple should have the same shape as the corresponding argument.

  • recompute_tensor (bool) – Whether or not the metric tensor should be recomputed. If not, the metric tensor from the previous optimization step is used.

  • metric_tensor_fn (function) – Optional metric tensor function with respect to the variables args. If None, the metric tensor function is computed automatically.

  • **kwargs – variable length of keyword arguments for the qnode

Returns

the new variable values $x^{(t+1)}$

Return type

array
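For example, based on the recompute_tensor description above, a second step can reuse the stored metric tensor to save circuit evaluations (a hedged sketch; accuracy degrades if the parameters move far between steps):

>>> params = opt.step(circuit, init_params)
>>> params = opt.step(circuit, params, recompute_tensor=False)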

step_and_cost(qnode, *args, grad_fn=None, recompute_tensor=True, metric_tensor_fn=None, **kwargs)

Update the parameter array x with one step of the optimizer and return the corresponding objective function value prior to the step.

Parameters
  • qnode (QNode) – the QNode for optimization

  • *args – variable length argument list for qnode

  • grad_fn (function) – optional gradient function of the qnode with respect to the variables *args. If None, the gradient function is computed automatically. Must return a tuple[array] with the same number of elements as *args. Each array of the tuple should have the same shape as the corresponding argument.

  • recompute_tensor (bool) – Whether or not the metric tensor should be recomputed. If not, the metric tensor from the previous optimization step is used.

  • metric_tensor_fn (function) – Optional metric tensor function with respect to the variables args. If None, the metric tensor function is computed automatically.

  • **kwargs – variable length of keyword arguments for the qnode

Returns

the new variable values $x^{(t+1)}$ and the objective function output prior to the step

Return type

tuple
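A typical optimization loop records the cost value returned at each step (a sketch continuing the example above; the iteration count is arbitrary):

>>> params = init_params
>>> costs = []
>>> for _ in range(100):
...     params, prev_cost = opt.step_and_cost(circuit, params)
...     costs.append(prev_cost)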
