qml.execute

execute(tapes, device, gradient_fn=None, interface='auto', transform_program=None, inner_transform=None, config=None, grad_on_execution='best', gradient_kwargs=None, cache=True, cachesize=10000, max_diff=1, override_shots=<UnsetType.UNSET: 'UNSET'>, expand_fn=<UnsetType.UNSET: 'UNSET'>, max_expansion=None, device_batch_transform=None, device_vjp=False, mcm_config=None)

New function to execute a batch of tapes on a device in an autodifferentiable-compatible manner. More cases will be added during the project. The current version supports forward execution for NumPy and does not support shot vectors.
- Parameters
tapes (Sequence[QuantumTape]) – batch of tapes to execute
device (pennylane.Device) – Device to use to execute the batch of tapes. If the device does not provide a batch_execute method, by default the tapes will be executed in serial.
gradient_fn (None or callable) – The gradient transform function to use for backward passes. If "device", the device will be queried directly for the gradient (if supported).
interface (str) – The interface that will be used for classical autodifferentiation. This affects the types of parameters that can exist on the input tapes. Available options include autograd, torch, tf, jax, and auto.
transform_program (TransformProgram) – A transform program to be applied to the initial tape.
inner_transform (TransformProgram) – A transform program to be applied to the tapes in inner execution, inside the ml interface.
config (qml.devices.ExecutionConfig) – A data structure describing the parameters needed to fully describe the execution.
grad_on_execution (bool, str) – Whether the gradients should be computed on the execution or not. Only applies if the device is queried for the gradient; gradient transform functions available in qml.gradients are only supported on the backward pass. The 'best' option, which is the default, chooses automatically between the two.
gradient_kwargs (dict) – dictionary of keyword arguments to pass when determining the gradients of tapes
cache (None, bool, dict, Cache) – Whether to cache evaluations. This can result in a significant reduction in quantum evaluations during gradient computations.
cachesize (int) – the size of the cache
max_diff (int) – If gradient_fn is a gradient transform, this option specifies the maximum number of derivatives to support. Increasing this value allows for higher-order derivatives to be extracted, at the cost of additional (classical) computational overhead during the backwards pass.
override_shots (int) – The number of shots to use for the execution. If False, then the number of shots on the device is used.
expand_fn (str, function) – Tape expansion function to be called prior to device execution. Must have signature of the form expand_fn(tape, max_expansion), and return a single QuantumTape. If not provided, by default Device.expand_fn() is called.
max_expansion (int) – The number of times the internal circuit should be expanded when executed on a device. Expansion occurs when an operation or measurement is not supported, and results in a gate decomposition. If any operations in the decomposition remain unsupported by the device, another expansion occurs.
device_batch_transform (bool) – Whether to apply any batch transforms defined by the device (within Device.batch_transform()) to each tape to be executed. The default behaviour of the device batch transform is to expand out Hamiltonian measurements into constituent terms if not supported on the device.
device_vjp (Optional[bool]) – whether or not to use the device-provided vector-Jacobian product if it is available.
mcm_config (dict) – Dictionary containing configuration options for handling mid-circuit measurements.
- Returns
A nested list of tape results. Each element in the returned list corresponds in order to the provided tapes.
- Return type
list[tensor_like[float]]
Warning
The following arguments are deprecated and will be removed in version 0.39: expand_fn, max_expansion, and device_batch_transform. Instead, please create a TransformProgram with the desired preprocessing and pass it to the transform_program argument. For instance, we can create a program that uses the qml.devices.preprocess.decompose transform with the desired expansion level and pass it to the qml.execute function:

from pennylane.devices.preprocess import decompose
from pennylane.transforms.core import TransformProgram

def stopping_condition(obj):
    return obj.name in {"CNOT", "RX", "RZ"}

tape = qml.tape.QuantumScript([qml.IsingXX(1.2, wires=(0, 1))], [qml.expval(qml.Z(0))])

program = TransformProgram()
program.add_transform(
    decompose,
    stopping_condition=stopping_condition,
    max_expansion=10,
)

dev = qml.device("default.qubit", wires=2)

>>> qml.execute([tape], dev, transform_program=program)
(0.36235775447667357,)
Warning
The override_shots argument is deprecated and will be removed in version 0.39. Instead, please add the shots to the QuantumTapes to be executed. For instance:

dev = qml.device("default.qubit", wires=1)
operations = [qml.PauliX(0)]
measurements = [qml.expval(qml.PauliZ(0))]
qs = qml.tape.QuantumTape(operations, measurements, shots=100)

>>> qml.execute([qs], dev)
(-1.0,)
Example
Consider the following cost function:
dev = qml.device("lightning.qubit", wires=2)

def cost_fn(params, x):
    ops1 = [qml.RX(params[0], wires=0), qml.RY(params[1], wires=0)]
    measurements1 = [qml.expval(qml.Z(0))]
    tape1 = qml.tape.QuantumTape(ops1, measurements1)

    ops2 = [
        qml.RX(params[2], wires=0),
        qml.RY(x[0], wires=1),
        qml.CNOT(wires=(0, 1))
    ]
    measurements2 = [qml.probs(wires=0)]
    tape2 = qml.tape.QuantumTape(ops2, measurements2)

    tapes = [tape1, tape2]

    # execute both tapes in a batch on the given device
    res = qml.execute(tapes, dev, gradient_fn=qml.gradients.param_shift, max_diff=2)

    return res[0] + res[1][0] - res[1][1]
In this cost function, two independent quantum tapes are being constructed; one returning an expectation value, the other probabilities. We then batch execute the two tapes, and reduce the results to obtain a scalar.
Let’s execute this cost function while tracking the gradient:
>>> params = np.array([0.1, 0.2, 0.3], requires_grad=True)
>>> x = np.array([0.5], requires_grad=True)
>>> cost_fn(params, x)
1.93050682
Since the execute function is differentiable, we can also compute the gradient:

>>> qml.grad(cost_fn)(params, x)
(array([-0.0978434 , -0.19767681, -0.29552021]), array([5.37764278e-17]))
Finally, we can also compute any nth-order derivative. Let’s compute the Jacobian of the gradient (that is, the Hessian):
>>> x.requires_grad = False
>>> qml.jacobian(qml.grad(cost_fn))(params, x)
array([[-0.97517033,  0.01983384,  0.        ],
       [ 0.01983384, -0.97517033,  0.        ],
       [ 0.        ,  0.        , -0.95533649]])