qml.execute
execute(tapes, device, gradient_fn=None, interface='auto', grad_on_execution='best', gradient_kwargs=None, cache=True, cachesize=10000, max_diff=1, override_shots=False, expand_fn='device', max_expansion=10, device_batch_transform=True)

Execute a batch of tapes on a device in an autodifferentiable-compatible manner. More cases will be added during the project; the current version supports forward execution for NumPy and does not support shot vectors.
- Parameters
tapes (Sequence[QuantumTape]) – batch of tapes to execute
device (pennylane.Device) – Device to use to execute the batch of tapes. If the device does not provide a batch_execute method, by default the tapes will be executed in serial.
gradient_fn (None or callable) – The gradient transform function to use for backward passes. If "device", the device will be queried directly for the gradient (if supported).
interface (str) – The interface that will be used for classical autodifferentiation. This affects the types of parameters that can exist on the input tapes. Available options include autograd, torch, tf, jax, and auto.
grad_on_execution (bool, str) – Whether the gradients should be computed during execution or not. Only applies if the device is queried for the gradient; gradient transform functions available in qml.gradients are only supported on the backward pass. The 'best' option chooses automatically between the two and is the default.
gradient_kwargs (dict) – dictionary of keyword arguments to pass when determining the gradients of tapes
cache (bool) – Whether to cache evaluations. This can result in a significant reduction in quantum evaluations during gradient computations.
cachesize (int) – the size of the cache
max_diff (int) – If gradient_fn is a gradient transform, this option specifies the maximum number of derivatives to support. Increasing this value allows for higher-order derivatives to be extracted, at the cost of additional (classical) computational overhead during the backwards pass. (A usage sketch follows this parameter list.)
override_shots (int) – The number of shots to use for the execution. If False, then the number of shots on the device is used.
expand_fn (function) – Tape expansion function to be called prior to device execution. Must have signature of the form expand_fn(tape, max_expansion), and return a single QuantumTape. If not provided, by default Device.expand_fn() is called. (See the custom expand_fn sketch after this list.)
max_expansion (int) – The number of times the internal circuit should be expanded when executed on a device. Expansion occurs when an operation or measurement is not supported, and results in a gate decomposition. If any operations in the decomposition remain unsupported by the device, another expansion occurs.
device_batch_transform (bool) – Whether to apply any batch transforms defined by the device (within Device.batch_transform()) to each tape to be executed. The default behaviour of the device batch transform is to expand out Hamiltonian measurements into constituent terms if not supported on the device.
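As a quick illustration of several of these keyword arguments, a minimal sketch follows; the device choice and tape contents here are illustrative assumptions rather than part of this reference:

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

# a minimal single-tape workload (illustrative only)
with qml.tape.QuantumTape() as tape:
    qml.RX(np.array(0.4, requires_grad=True), wires=0)
    qml.expval(qml.PauliZ(0))

# request a gradient transform, a bounded evaluation cache,
# and support for up to second-order derivatives
res = qml.execute(
    [tape],
    dev,
    gradient_fn=qml.gradients.param_shift,
    cache=True,
    cachesize=500,
    max_diff=2,
)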
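Similarly, a custom expand_fn must match the documented expand_fn(tape, max_expansion) signature and return a single QuantumTape. The sketch below is hypothetical: the name my_expand_fn and the choice to expand to a fixed depth are assumptions, not part of the API.

def my_expand_fn(tape, max_expansion=10):
    # hypothetical expansion function: decompose the tape up to
    # max_expansion levels and return the resulting QuantumTape
    return tape.expand(depth=max_expansion)

res = qml.execute([tape], dev, gradient_fn=qml.gradients.param_shift, expand_fn=my_expand_fn)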
- Returns
A nested list of tape results. Each element in the returned list corresponds in order to the provided tapes.
- Return type
list[tensor_like[float]]
Example
Consider the following cost function:
dev = qml.device("lightning.qubit", wires=2)

def cost_fn(params, x):
    with qml.tape.QuantumTape() as tape1:
        qml.RX(params[0], wires=0)
        qml.RY(params[1], wires=0)
        qml.expval(qml.PauliZ(0))

    with qml.tape.QuantumTape() as tape2:
        qml.RX(params[2], wires=0)
        qml.RY(x[0], wires=1)
        qml.CNOT(wires=[0, 1])
        qml.probs(wires=0)

    tapes = [tape1, tape2]

    # execute both tapes in a batch on the given device
    res = qml.execute(tapes, dev, gradient_fn=qml.gradients.param_shift, max_diff=2)

    return res[0] + res[1][0] - res[1][1]
In this cost function, two independent quantum tapes are constructed: one returning an expectation value, the other probabilities. We then batch-execute the two tapes and reduce the results to obtain a scalar.
Let’s execute this cost function while tracking the gradient:
>>> params = np.array([0.1, 0.2, 0.3], requires_grad=True)
>>> x = np.array([0.5], requires_grad=True)
>>> cost_fn(params, x)
1.93050682
Since the execute function is differentiable, we can also compute the gradient:

>>> qml.grad(cost_fn)(params, x)
(array([-0.0978434 , -0.19767681, -0.29552021]), array([5.37764278e-17]))
Finally, we can also compute any nth-order derivative. Let’s compute the Jacobian of the gradient (that is, the Hessian):
>>> x.requires_grad = False
>>> qml.jacobian(qml.grad(cost_fn))(params, x)
array([[-0.97517033,  0.01983384,  0.        ],
       [ 0.01983384, -0.97517033,  0.        ],
       [ 0.        ,  0.        , -0.95533649]])