qml.gradients.hadamard_grad
- hadamard_grad(tape, argnum=None, aux_wire=None, device_wires=None, mode='standard')
Transform a circuit to compute the Hadamard test gradient of all gates with respect to their inputs.
- Parameters:
tape (QNode or QuantumTape) – quantum circuit to differentiate
argnum (int or list[int] or None) – Trainable tape parameter indices to differentiate with respect to. If not provided, the derivatives with respect to all trainable parameters are returned. Note that the indices are with respect to the list of trainable parameters.
aux_wire (pennylane.wires.Wires) – Auxiliary wire to be used for the Hadamard tests. If None (the default) and mode is "standard" or "reversed", a suitable wire is inferred from the wires used in the original circuit and device_wires.
device_wires (pennylane.wires.Wires) – Wires of the device that are going to be used for the gradient. Facilitates finding a default for aux_wire if aux_wire is None.
mode (str) – Specifies the gradient computation mode. Accepted values are "standard", "reversed", "direct", "reversed-direct", or "auto". Defaults to "standard". The "auto" mode chooses the method that leads to the fewest total executions, based on the circuit observable and whether or not an auxiliary wire has been provided.
- Returns:
The transformed circuit as described in qml.transform. Executing this circuit will provide the Jacobian in the form of a tensor, a tuple, or a nested tuple depending upon the nesting structure of measurements in the original circuit.
- Return type:
qnode (QNode) or tuple[List[QuantumTape], function]
For a variational evolution \(U(\mathbf{p}) \vert 0\rangle\) with \(N\) parameters \(\mathbf{p}\), consider the expectation value of an observable \(O\):
\[f(\mathbf{p}) = \langle \hat{O} \rangle(\mathbf{p}) = \langle 0 \vert U(\mathbf{p})^\dagger \hat{O} U(\mathbf{p}) \vert 0\rangle.\]
The gradient of this expectation value can be calculated via the Hadamard test gradient:
\[\frac{\partial f}{\partial \mathbf{p}} = -2 \Im[\bra{0} \hat{O} G \ket{0}] = i \left(\bra{0} \hat{O} G \ket{0} - \bra{0} G\hat{O} \ket{0}\right) = -2 \bra{+}\bra{0} \texttt{ctrl}\left(G^{\dagger}\right) (\hat{Y} \otimes \hat{O}) \texttt{ctrl}\left(G\right) \ket{+}\ket{0}\]
Here, \(G\) is the generator of the unitary \(U\).
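As a quick numerical check of this recipe, the Hadamard-test gradient can be compared against the parameter-shift rule. A minimal sketch, assuming default.qubit and an illustrative single-RX circuit:

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit")

@qml.qnode(dev)
def circuit(theta):
    qml.RX(theta, wires=0)
    return qml.expval(qml.Z(0))

theta = np.array(0.4, requires_grad=True)

# Both recipes should agree with the analytic derivative -sin(0.4).
print(qml.gradients.hadamard_grad(circuit, aux_wire=1)(theta))
print(qml.gradients.param_shift(circuit)(theta))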
hadamard_grad will work on any \(U\) so long as it has a generator \(G\) defined (i.e., op.has_generator == True). Otherwise, it will try to decompose into gates where this is satisfied.

Example
This transform can be registered directly as the quantum gradient transform to use during autodifferentiation:
>>> import jax
>>> dev = qml.device("default.qubit")
>>> @qml.qnode(dev, diff_method="hadamard", gradient_kwargs={"mode": "standard", "aux_wire": 1})
... def circuit(params):
...     qml.RX(params[0], wires=0)
...     qml.RY(params[1], wires=0)
...     qml.RX(params[2], wires=0)
...     return qml.expval(qml.Z(0)), qml.probs(wires=0)
>>> params = jax.numpy.array([0.1, 0.2, 0.3])
>>> jax.jacobian(circuit)(params)
(Array([-0.3875172 , -0.18884787, -0.38355705], dtype=float64),
 Array([[-0.1937586 , -0.09442393, -0.19177853],
        [ 0.1937586 ,  0.09442393,  0.19177853]], dtype=float64))
Usage Details
This gradient method can work with any operator that has a generator:
>>> dev = qml.device('default.qubit')
>>> @qml.qnode(dev)
... def circuit(x):
...     qml.evolve(qml.X(0) @ qml.X(1) + qml.Z(0) @ qml.Z(1) + qml.H(0), x)
...     return qml.expval(qml.Z(0))
...
>>> print(qml.draw(qml.gradients.hadamard_grad(circuit, aux_wire=2))(qml.numpy.array(0.5)))
0: ─╭Exp(-0.50j 𝓗)─╭X────┤ ╭<Z@Y>
1: ─╰Exp(-0.50j 𝓗)─│─────┤ │
2: ──H─────────────╰●──H─┤ ╰<Z@Y>

0: ─╭Exp(-0.50j 𝓗)─╭X@X────┤ ╭<Z@Y>
1: ─╰Exp(-0.50j 𝓗)─├X@X────┤ │
2: ──H─────────────╰●────H─┤ ╰<Z@Y>

0: ─╭Exp(-0.50j 𝓗)─╭Z────┤ ╭<Z@Y>
1: ─╰Exp(-0.50j 𝓗)─│─────┤ │
2: ──H─────────────╰●──H─┤ ╰<Z@Y>

0: ─╭Exp(-0.50j 𝓗)─╭Z@Z────┤ ╭<Z@Y>
1: ─╰Exp(-0.50j 𝓗)─├Z@Z────┤ │
2: ──H─────────────╰●────H─┤ ╰<Z@Y>
This gradient transform can be applied directly to
QNode objects. However, for performance reasons, we recommend providing the gradient transform as the diff_method argument of the QNode decorator, and differentiating with your preferred machine learning framework.
>>> dev = qml.device("default.qubit")
>>> @qml.qnode(dev)
... def circuit(params):
...     qml.RX(params[0], wires=0)
...     qml.RY(params[1], wires=0)
...     qml.RX(params[2], wires=0)
...     return qml.expval(qml.Z(0))
>>> params = qml.numpy.array([0.1, 0.2, 0.3], requires_grad=True)
>>> qml.gradients.hadamard_grad(circuit, mode="auto", aux_wire=1)(params)
tensor([-0.3875172 , -0.18884787, -0.38355704], requires_grad=True)
This quantum gradient transform can also be applied to low-level
QuantumTape objects. This will result in no implicit quantum device evaluation. Instead, the processed tapes and post-processing function, which together define the gradient, are directly returned:
>>> ops = [qml.RX(params[0], 0), qml.RY(params[1], 0), qml.RX(params[2], 0)]
>>> measurements = [qml.expval(qml.Z(0))]
>>> tape = qml.tape.QuantumTape(ops, measurements)
>>> gradient_tapes, fn = qml.gradients.hadamard_grad(tape, mode="auto", aux_wire=1)
>>> gradient_tapes
[<QuantumScript: wires=[0, 1], params=3>,
 <QuantumScript: wires=[0, 1], params=3>,
 <QuantumScript: wires=[0, 1], params=3>]
This can be useful if the underlying circuits representing the gradient computation need to be analyzed.
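For instance, the contents of each generated circuit can be printed. A minimal sketch, assuming the gradient_tapes from above:

# Inspect the operations and measurements of each gradient circuit.
for i, g_tape in enumerate(gradient_tapes):
    print(f"tape {i}, wires {list(g_tape.wires)}:")
    for op in g_tape.operations:
        print("    ", op)
    print("    measurements:", g_tape.measurements)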
Note that
argnum refers to the index of a parameter within the list of trainable parameters. For example, if we have:
>>> tape = qml.tape.QuantumScript(
...     [qml.RX(1.2, wires=0), qml.RY(2.3, wires=0), qml.RZ(3.4, wires=0)],
...     [qml.expval(qml.Z(0))],
...     trainable_params=[1, 2]
... )
>>> qml.gradients.hadamard_grad(tape, argnum=1, mode="auto", aux_wire=1)
The code above will differentiate the third parameter rather than the second.
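A small sketch to confirm which gate argnum=1 refers to, assuming the tape defined above (each gate here carries a single parameter, so parameter indices coincide with operation indices):

# trainable_params is [1, 2], so argnum=1 selects trainable index 1,
# which is tape parameter 2, i.e. the RZ(3.4) gate (the third operation).
trainable = list(tape.trainable_params)   # [1, 2]
print(tape.operations[trainable[1]])      # RZ(3.4, wires=[0])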
The output tapes can then be evaluated and post-processed to retrieve the gradient:
>>> dev = qml.device("default.qubit")
>>> fn(qml.execute(gradient_tapes, dev, None))
[np.float64(-0.3875172020222171), np.float64(-0.18884787122715604), np.float64(-0.38355704238148114)]
This transform can be registered directly as the quantum gradient transform to use during autodifferentiation:
>>> dev = qml.device("default.qubit")
>>> @qml.qnode(dev, interface="jax", diff_method="hadamard", gradient_kwargs={"mode": "standard", "aux_wire": 1})
... def circuit(params):
...     qml.RX(params[0], wires=0)
...     qml.RY(params[1], wires=0)
...     qml.RX(params[2], wires=0)
...     return qml.expval(qml.Z(0))
>>> params = jax.numpy.array([0.1, 0.2, 0.3])
>>> jax.jacobian(circuit)(params)
Array([-0.3875172 , -0.18884787, -0.38355705], dtype=float64)
If you use custom wires on your device, and you want to use the “standard” or “reversed” modes, you need to pass an auxiliary wire.
>>> dev_wires = ("a", "c")
>>> dev = qml.device("default.qubit", wires=dev_wires)
>>> gradient_kwargs = {"aux_wire": "c"}
>>> @qml.qnode(dev, interface="jax", diff_method="hadamard", gradient_kwargs=gradient_kwargs)
... def circuit(params):
...     qml.RX(params[0], wires="a")
...     qml.RY(params[1], wires="a")
...     qml.RX(params[2], wires="a")
...     return qml.expval(qml.Z("a"))
>>> params = jax.numpy.array([0.1, 0.2, 0.3])
>>> jax.jacobian(circuit)(params)
Array([-0.3875172 , -0.18884787, -0.38355705], dtype=float64)
Variants of the standard Hadamard gradient
This gradient method has three modes that are adaptations of the standard Hadamard gradient method (these are outlined in detail in arXiv:2408.05406).
Reversed mode
With the
"reversed"mode, the observable being measured and the generators of the unitary operations in the circuit are reversed; the generators are now the observables, and the Pauli decomposition of the observables are now gates in the circuit:>>> dev = qml.device('default.qubit') >>> @qml.qnode(dev) ... def circuit(x): ... qml.evolve(qml.X(0) @ qml.X(1) + qml.Z(0) @ qml.Z(1) + qml.H(0), x) ... return qml.expval(qml.Z(0)) ... >>> grad = qml.gradients.hadamard_grad(circuit, mode='reversed', aux_wire=2) >>> print(qml.draw(grad)(qml.numpy.array(0.5))) 0: ─╭Exp(-0.50j 𝓗)─╭Z────┤ ╭<(-1.00*𝓗)@Y> 1: ─╰Exp(-0.50j 𝓗)─│─────┤ ├<(-1.00*𝓗)@Y> 2: ──H─────────────╰●──H─┤ ╰<(-1.00*𝓗)@Y>
Direct mode
With the
"direct"mode, the additional auxiliary qubit needed in the standard Hadamard gradient is exchanged for additional circuit executions:>>> grad = qml.gradients.hadamard_grad(circuit, mode='direct') >>> print(qml.draw(grad)(qml.numpy.array(0.5))) 0: ─╭Exp(-0.50j 𝓗)──Exp(-0.79j X)─┤ <Z> 1: ─╰Exp(-0.50j 𝓗)────────────────┤ 0: ─╭Exp(-0.50j 𝓗)──Exp(0.79j X)─┤ <Z> 1: ─╰Exp(-0.50j 𝓗)───────────────┤ 0: ─╭Exp(-0.50j 𝓗)─╭Exp(-0.79j X@X)─┤ <Z> 1: ─╰Exp(-0.50j 𝓗)─╰Exp(-0.79j X@X)─┤ 0: ─╭Exp(-0.50j 𝓗)─╭Exp(0.79j X@X)─┤ <Z> 1: ─╰Exp(-0.50j 𝓗)─╰Exp(0.79j X@X)─┤ 0: ─╭Exp(-0.50j 𝓗)──Exp(-0.79j Z)─┤ <Z> 1: ─╰Exp(-0.50j 𝓗)────────────────┤ 0: ─╭Exp(-0.50j 𝓗)──Exp(0.79j Z)─┤ <Z> 1: ─╰Exp(-0.50j 𝓗)───────────────┤ 0: ─╭Exp(-0.50j 𝓗)─╭Exp(-0.79j Z@Z)─┤ <Z> 1: ─╰Exp(-0.50j 𝓗)─╰Exp(-0.79j Z@Z)─┤ 0: ─╭Exp(-0.50j 𝓗)─╭Exp(0.79j Z@Z)─┤ <Z> 1: ─╰Exp(-0.50j 𝓗)─╰Exp(0.79j Z@Z)─┤
Reversed direct mode
The
"reversed-direct"mode is a combination of the"direct"and"reversed"modes, where the role of the observable and the generators of the unitary operations in the circuit swap, and the additional auxiliary qubit is exchanged for additional circuit executions:>>> grad = qml.gradients.hadamard_grad(circuit, mode='reversed-direct') >>> print(qml.draw(grad)(qml.numpy.array(0.5))) 0: ─╭Exp(-0.50j 𝓗)──Exp(-0.79j Z)─┤ ╭<-1.00*𝓗> 1: ─╰Exp(-0.50j 𝓗)────────────────┤ ╰<-1.00*𝓗> 0: ─╭Exp(-0.50j 𝓗)──Exp(0.79j Z)─┤ ╭<-1.00*𝓗> 1: ─╰Exp(-0.50j 𝓗)───────────────┤ ╰<-1.00*𝓗>
Auto mode
Using "auto" mode automatically selects the method that results in the fewest total executions, given the wires available. Any auxiliary wire must be provided explicitly. The selection takes into account the number of observables and the number of generators involved in the problem to decide whether the standard or reversed order is preferable, as well as whether there is a single measurement or several and whether an auxiliary wire is available.
Auxiliary Wire | Standard Order | Method
-------------- | -------------- | ------------------------------
False          | True           | Direct Hadamard test
False          | False          | Reversed direct Hadamard test
True           | True           | Hadamard test
True           | False          | Reversed Hadamard test
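To see how the auxiliary wire affects this choice, the gradient circuits produced with and without one can be drawn side by side. A minimal sketch with an illustrative single-qubit circuit (the drawn circuits show which variant "auto" selected in each case):

import pennylane as qml

dev = qml.device("default.qubit")

@qml.qnode(dev)
def circ(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.Z(0))

x = qml.numpy.array(0.5)

# With an auxiliary wire, "auto" can use an auxiliary-wire-based Hadamard
# test; without one, it falls back to a direct variant.
print(qml.draw(qml.gradients.hadamard_grad(circ, mode="auto", aux_wire=1))(x))
print(qml.draw(qml.gradients.hadamard_grad(circ, mode="auto"))(x))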
In the circuit below, the direct method is automatically selected, and we can verify that it is the most efficient choice. Since we do not supply an auxiliary wire, the choice is between the "direct" and "reversed-direct" modes.
>>> dev = qml.device('default.qubit')
>>> @qml.qnode(dev)
... def circuit(x):
...     qml.evolve(qml.X(0) @ qml.X(1), x)
...     return qml.expval(qml.Z(0) @ qml.Z(1) + qml.Y(0))
>>> grad = qml.gradients.hadamard_grad(circuit, mode='auto')
>>> print(qml.draw(grad)(qml.numpy.array(0.5)))
0: ─╭Exp(-0.50j X@X)─╭Exp(-0.79j X@X)─┤ ╭<𝓗>
1: ─╰Exp(-0.50j X@X)─╰Exp(-0.79j X@X)─┤ ╰<𝓗>

0: ─╭Exp(-0.50j X@X)─╭Exp(0.79j X@X)─┤ ╭<𝓗>
1: ─╰Exp(-0.50j X@X)─╰Exp(0.79j X@X)─┤ ╰<𝓗>
For comparison, drawing the gradient circuits for the "reversed-direct" mode on the same QNode shows that it requires more executions:
>>> grad = qml.gradients.hadamard_grad(circuit, mode='reversed-direct')
>>> print(qml.draw(grad)(qml.numpy.array(0.5)))
0: ─╭Exp(-0.50j X@X)─╭Exp(-0.79j Z@Z)─┤ ╭<-1.00*X@X>
1: ─╰Exp(-0.50j X@X)─╰Exp(-0.79j Z@Z)─┤ ╰<-1.00*X@X>

0: ─╭Exp(-0.50j X@X)─╭Exp(0.79j Z@Z)─┤ ╭<-1.00*X@X>
1: ─╰Exp(-0.50j X@X)─╰Exp(0.79j Z@Z)─┤ ╰<-1.00*X@X>

0: ─╭Exp(-0.50j X@X)──Exp(-0.79j Y)─┤ ╭<-1.00*X@X>
1: ─╰Exp(-0.50j X@X)────────────────┤ ╰<-1.00*X@X>

0: ─╭Exp(-0.50j X@X)──Exp(0.79j Y)─┤ ╭<-1.00*X@X>
1: ─╰Exp(-0.50j X@X)───────────────┤ ╰<-1.00*X@X>
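Here, "reversed-direct" generates four circuits, compared to two for the automatically selected "direct" mode. A minimal sketch, assuming the dev and circuit defined above, that cross-checks the gradient values and counts device executions with qml.Tracker:

x = qml.numpy.array(0.5)

with qml.Tracker(dev) as t_direct:
    g_direct = qml.gradients.hadamard_grad(circuit, mode="direct")(x)
with qml.Tracker(dev) as t_reversed:
    g_reversed = qml.gradients.hadamard_grad(circuit, mode="reversed-direct")(x)

# Both modes return the same gradient value; "direct" should report fewer
# device executions (two circuits versus four for this example).
print(g_direct, t_direct.totals["executions"])
print(g_reversed, t_reversed.totals["executions"])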