Gradients and training¶
PennyLane offers seamless integration between classical and quantum computations. Code up quantum circuits in PennyLane, compute their gradients, and connect them easily to the top scientific computing and machine learning libraries.
Training and interfaces¶
The bridge between the quantum and classical worlds is provided in PennyLane via interfaces to automatic differentiation libraries. Currently, four libraries are supported: NumPy, PyTorch, JAX, and TensorFlow. PennyLane makes each of these libraries quantum-aware, allowing quantum circuits to be treated just like any other operation. Any automatic differentiation framework can be chosen with any device.
In PennyLane, an automatic differentiation framework is declared using the interface argument when creating a QNode, e.g.,
@qml.qnode(dev, interface="tf")
def my_quantum_circuit(...):
...
Note
If no interface is specified, PennyLane will automatically determine the interface based on provided arguments and keyword arguments.
See qml.workflow.SUPPORTED_INTERFACES
for a list of all accepted interface strings.
This will allow native numerical objects of the specified library (NumPy arrays, JAX arrays, Torch Tensors, or TensorFlow Tensors) to be passed as parameters to the quantum circuit. It also makes the gradients of the quantum circuit accessible to the classical library, enabling the optimization of arbitrary hybrid circuits by making use of the library’s native optimizers.
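For example, here is a minimal sketch (the one-qubit circuit below is illustrative, not taken from this page) of passing a JAX array to a QNode and differentiating it with jax.grad:

import jax
import jax.numpy as jnp
import pennylane as qml

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, interface="jax")
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))

x = jnp.array(0.3)
print(circuit(x))            # forward pass returns a JAX array: cos(0.3)
print(jax.grad(circuit)(x))  # gradient of the expectation value: -sin(0.3)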
In most cases, when an interface is specified, objects of the chosen framework are converted into NumPy objects before being passed to the device. The exception is devices that support end-to-end computation in the framework itself; such devices are referred to as backpropagation or passthru devices.
Walkthroughs are available for each specific interface: NumPy, PyTorch, JAX, and TensorFlow.
In addition to the core automatic differentiation frameworks discussed above,
PennyLane also provides higher-level classes for converting QNodes into both Keras and torch.nn
layers:
pennylane.qnn.KerasLayer(*args, **kwargs): Converts a QNode to a Keras layer.
pennylane.qnn.TorchLayer(qnode, weight_shapes): Converts a QNode to a Torch layer.
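For instance, a minimal sketch of wrapping a QNode as a Torch layer (the embedding, entangler template, and weight_shapes below are illustrative choices, not prescribed by the API):

import torch
import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(2))
    qml.BasicEntanglerLayers(weights, wires=range(2))
    return [qml.expval(qml.PauliZ(w)) for w in range(2)]

weight_shapes = {"weights": (3, 2)}  # 3 entangling layers acting on 2 wires
qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)

# The quantum layer composes with ordinary Torch layers
model = torch.nn.Sequential(qlayer, torch.nn.Linear(2, 1))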
Note
QNodes that allow for automatic differentiation will always incur a small overhead on evaluation. If you do not need to compute quantum gradients of a QNode, specifying interface=None will remove this overhead and result in a slightly faster evaluation. However, gradients will no longer be available.
Optimizers¶
Optimizers are objects which can be used to automatically update the parameters of a quantum or hybrid machine learning model. The optimizers you should use are dependent on your choice of the classical autodifferentiation library, and are available from different access points.
NumPy¶
When using the standard NumPy framework, PennyLane offers some built-in optimizers.
Some of these are specific to quantum optimization, such as the QNGOptimizer, RiemannianGradientOptimizer, RotosolveOptimizer, RotoselectOptimizer, ShotAdaptiveOptimizer, and QNSPSAOptimizer.
- AdagradOptimizer: Gradient-descent optimizer with past-gradient-dependent learning rate in each dimension.
- AdamOptimizer: Gradient-descent optimizer with adaptive learning rate, first and second moment.
- AdaptiveOptimizer: Optimizer for building fully trained quantum circuits by adding gates adaptively.
- GradientDescentOptimizer: Basic gradient-descent optimizer.
- MomentumOptimizer: Gradient-descent optimizer with momentum.
- NesterovMomentumOptimizer: Gradient-descent optimizer with Nesterov momentum.
- QNGOptimizer: Optimizer with adaptive learning rate, via calculation of the diagonal or block-diagonal approximation to the Fubini-Study metric tensor.
- RiemannianGradientOptimizer: Riemannian gradient optimizer.
- RMSPropOptimizer: Root mean squared propagation optimizer.
- RotosolveOptimizer: Rotosolve gradient-free optimizer.
- RotoselectOptimizer: Rotoselect gradient-free optimizer.
- ShotAdaptiveOptimizer: Optimizer where the shot rate is adaptively calculated using the variances of the parameter-shift gradient.
- SPSAOptimizer: Optimizer based on the Simultaneous Perturbation Stochastic Approximation (SPSA) method, a stochastic approximation algorithm for optimizing cost functions whose evaluation may involve noise.
- QNSPSAOptimizer: Quantum natural SPSA (QNSPSA) optimizer.
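As a sketch of the built-in optimizer API (the cost circuit, step size, and iteration count below are arbitrary choices):

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def cost(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    return qml.expval(qml.PauliZ(0))

opt = qml.GradientDescentOptimizer(stepsize=0.4)
params = np.array([0.1, 0.2], requires_grad=True)

for _ in range(50):
    params = opt.step(cost, params)  # one gradient-descent update per call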
PyTorch¶
If you are using the PennyLane PyTorch framework, you should import one of the native PyTorch optimizers (found in torch.optim).
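A minimal sketch of this pattern (the one-qubit circuit, the choice of Adam, and its hyperparameters are illustrative):

import torch
import pennylane as qml

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, interface="torch")
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))

x = torch.tensor(0.3, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.1)

for _ in range(100):
    opt.zero_grad()
    loss = circuit(x)  # the expectation value acts as the cost
    loss.backward()    # PennyLane supplies the quantum gradient to Torch
    opt.step()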
TensorFlow¶
When using the PennyLane TensorFlow framework, you will need to leverage one of the TensorFlow optimizers (found in tf.keras.optimizers).
JAX¶
Check out the JAXopt and the Optax packages to find optimizers for the PennyLane JAX framework.
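For example, a minimal Optax sketch (reusing the JAX-interface QNode pattern shown earlier; the optimizer and learning rate are illustrative):

import jax
import jax.numpy as jnp
import optax
import pennylane as qml

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, interface="jax")
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))

params = jnp.array(0.3)
opt = optax.adam(learning_rate=0.1)
opt_state = opt.init(params)

for _ in range(100):
    grads = jax.grad(circuit)(params)
    updates, opt_state = opt.update(grads, opt_state)
    params = optax.apply_updates(params, updates)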
Gradients¶
The interface between PennyLane and automatic differentiation libraries relies on PennyLane's ability
to compute or estimate gradients of quantum circuits. There are different strategies to do so, and they may
depend on the device used.
When creating a QNode, you can specify the differentiation method like this:
@qml.qnode(dev, diff_method="parameter-shift")
def circuit(x):
    qml.RX(x, wires=0)
    return qml.probs(wires=0)
PennyLane currently provides the following differentiation methods for QNodes:
Simulation-based differentiation¶
The following methods use reverse accumulation to compute
gradients; a well-known example of this approach is backpropagation. These methods are not hardware compatible; they are only supported on statevector simulator devices such as default.qubit.
However, for rapid prototyping on simulators, these methods typically outperform forward-mode accumulators such as the parameter-shift rule and finite-differences. For more details, see the quantum backpropagation demonstration.
- "backprop": Use standard backpropagation. This differentiation method is only allowed on simulator devices that are classically end-to-end differentiable, for example default.qubit. This method does not work on devices that estimate measurement statistics using a finite number of shots; please use the parameter-shift rule instead.
- "adjoint": Use a form of backpropagation that takes advantage of the unitary or reversible nature of quantum computation. The adjoint method reverses through the circuit after a forward pass by iteratively applying the inverse (adjoint) gate. This method is similar to "backprop", but has significantly lower memory usage and a similar runtime.
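As a sketch (the circuit and parameter value are illustrative), selecting one of these simulator-only methods only requires changing the diff_method argument:

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev, diff_method="adjoint")
def circuit(x):
    qml.RX(x, wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(1))

x = np.array(0.4, requires_grad=True)
print(qml.grad(circuit)(x))  # gradient computed with the adjoint method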
Hardware-compatible differentiation¶
The following methods support both quantum hardware and simulators, and are examples of forward accumulation. However, when using a simulator, you may notice that the number of circuit executions required to compute the gradients with these methods scales linearly with the number of trainable circuit parameters.
- "parameter-shift": Use the analytic parameter-shift rule for all supported quantum operation arguments, with finite-difference as a fallback (the simplest form of the rule is sketched after this list).
- "finite-diff": Use numerical finite-differences for all quantum operation arguments.
- "hadamard": Use Hadamard tests on the generators for all compatible quantum operation arguments.
- qml.gradients.stoch_pulse_grad: Use a stochastic variant of the parameter-shift rule for pulse programs.
- qml.gradients.pulse_odegen: Combine classical processing with the parameter-shift rule for multivariate gates to differentiate pulse programs.
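For reference, the simplest instance of the parameter-shift rule, valid for gates whose generator has the two eigenvalues ±1/2 (such as the Pauli rotations RX, RY, RZ), evaluates the same circuit at two shifted parameter values; more general gates use generalizations of this rule:

\frac{\partial f}{\partial \theta} = \frac{f(\theta + \pi/2) - f(\theta - \pi/2)}{2}

Here f(θ) denotes the circuit's expectation value as a function of the gate parameter θ. Because both terms are evaluations of the original circuit, the rule can be executed on hardware as well as on simulators, which is what makes these methods hardware compatible.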
Device gradients¶
- "device": Queries the device directly for the gradient. Only allowed on devices that provide their own gradient computation.
Note
If not specified, the default differentiation method is diff_method="best". PennyLane will attempt to determine the best differentiation method given the device and interface. Typically, PennyLane will prioritize device-provided gradients, backpropagation, parameter-shift rule, and finally finite differences, in that order.
Gradient transforms¶
In addition to registering the differentiation method of QNodes to be used with autodifferentiation frameworks, PennyLane also provides a library of gradient transforms via the qml.gradients module.
Quantum gradient transforms are strategies for computing the gradient of a quantum
circuit that work by transforming the quantum circuit into one or more gradient circuits.
Alongside these circuits, the transform also returns a function that post-processes their output.
These gradient circuits, once executed and post-processed, return the gradient
of the original circuit. Examples of quantum gradient transforms include finite-difference rules and parameter-shift
rules; these can be applied directly to QNodes:
dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(weights):
    qml.RX(weights[0], wires=0)
    qml.RY(weights[1], wires=1)
    qml.CNOT(wires=[0, 1])
    qml.RX(weights[2], wires=1)
    return qml.probs(wires=1)
>>> weights = np.array([0.1, 0.2, 0.3], requires_grad=True)
>>> circuit(weights)
tensor([0.9658079, 0.0341921], requires_grad=True)
>>> qml.gradients.param_shift(circuit)(weights)
(tensor([-0.04673668,  0.04673668], requires_grad=True),
 tensor([-0.09442394,  0.09442394], requires_grad=True),
 tensor([-0.14409127,  0.14409127], requires_grad=True))
Note that, while gradient transforms allow quantum gradient rules to be applied directly to QNodes, this is not a replacement for standard training workflows and should not be used instead of them (for example, qml.grad() if using Autograd, loss.backward() for PyTorch, or tape.gradient() for TensorFlow). This is because gradient transforms do not take into account classical computation nodes, and only support gradients of QNodes.
For more details on available gradient transforms, as well as learning how to define your own gradient transform, please see the qml.gradients documentation.
Differentiating gradient transforms and higher-order derivatives¶
Gradient transforms are themselves differentiable, allowing higher-order gradients to be computed:
dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(weights):
    qml.RX(weights[0], wires=0)
    qml.RY(weights[1], wires=1)
    qml.CNOT(wires=[0, 1])
    qml.RX(weights[2], wires=1)
    return qml.expval(qml.PauliZ(1))
>>> weights = np.array([0.1, 0.2, 0.3], requires_grad=True)
>>> circuit(weights)
tensor(0.9316158, requires_grad=True)
>>> qml.gradients.param_shift(circuit)(weights)  # gradient
(tensor(-0.09347337, requires_grad=True),
 tensor(-0.18884787, requires_grad=True),
 tensor(-0.28818254, requires_grad=True))
>>> def f(weights):
...     return np.stack(qml.gradients.param_shift(circuit)(weights))
>>> qml.jacobian(f)(weights)  # hessian
array([[[-0.9316158 ,  0.01894799,  0.0289147 ],
        [ 0.01894799, -0.9316158 ,  0.05841749],
        [ 0.0289147 ,  0.05841749, -0.9316158 ]]])
Another way to compute higher-order derivatives is by passing the max_diff and diff_method arguments to the QNode and by successive differentiation:
@qml.qnode(dev, diff_method="parameter-shift", max_diff=2)
def circuit(weights):
    qml.RX(weights[0], wires=0)
    qml.RY(weights[1], wires=1)
    qml.CNOT(wires=[0, 1])
    qml.RX(weights[2], wires=1)
    return qml.expval(qml.PauliZ(1))
>>> weights = np.array([0.1, 0.2, 0.3], requires_grad=True)
>>> qml.jacobian(qml.jacobian(circuit))(weights)  # hessian
array([[-0.9316158 ,  0.01894799,  0.0289147 ],
       [ 0.01894799, -0.9316158 ,  0.05841749],
       [ 0.0289147 ,  0.05841749, -0.9316158 ]])
Note that the max_diff argument only applies to gradient transforms and that its default value is 1; failing to set its value correctly may yield incorrect results for higher-order derivatives. Also, passing diff_method="parameter-shift" is equivalent to passing diff_method=qml.gradients.param_shift.
Supported configurations¶
The table below shows all the currently supported functionality for the "default.qubit" device. At the moment, it takes into account the following parameters:
- The interface, e.g. "jax"
- The differentiation method, e.g. "parameter-shift"
- The return value of the QNode, e.g. qml.expval() or qml.probs()
- The number of shots, either None or an integer > 0
[Table: support status for every combination of interface (None, "autograd", "jax", "tf", "torch") and differentiation method ("device", "backprop", "adjoint", "parameter-shift", "finite-diff", "spsa", "hadamard"), broken down by return type (state, density matrix, probs, sample, expval (obs), expval (herm), expval (proj), var, vn entropy, mutual info); each cell refers to one of the numbered notes below.]
1. Not supported. Gradients are not computed even though diff_method is provided. Fails with error.
2. Not supported. Gradients are not computed even though diff_method is provided. Warns that no auto-differentiation framework is being used, but does not fail. The forward pass is still supported.
3. Not supported. The default.qubit device does not provide a native way to compute gradients. See Device jacobian for details.
4. Supported, but only when shots=None. See Backpropagation for details.
5. If the circuit returns a state, then the circuit itself is not differentiable directly. However, any real scalar-valued post-processing done to the output of the circuit will be differentiable. See State gradients for details.
6. Supported, but only when shots=None. See Backpropagation for details.
7. Not supported. The adjoint differentiation algorithm is only implemented for analytic simulation. See Adjoint differentiation for details.
8. Supported. Raises an error when shots>0, since the gradient is always computed analytically. See Adjoint differentiation for details.
9. Supported.
10. Not supported. The discretization of the output caused by wave function collapse is not differentiable. The forward pass is still supported. See Sample gradients for details.
11. Not supported. "We just don't have the theory yet."
12. Not implemented.