qml.workflow.interfaces.torch.ExecuteTapes

class ExecuteTapes(*args, **kwargs)

Bases: torch.autograd.function.Function

The signature of this torch.autograd.Function is designed to work around Torch restrictions.

In particular, torch.autograd.Function:

  • Cannot accept keyword arguments. As a result, we pass a dictionary as the first argument kwargs. This dictionary must contain:

    • "tapes": the quantum tapes to batch evaluate

    • "execute_fn": a function that calculates the results of the tapes

    • "jpc": a JacobianProductCalculator that can compute the vjp.

Further, note that the parameters argument depends on the tapes; this function should always be called with the parameters extracted directly from the tapes, as follows:

>>> parameters = [p for t in tapes for p in t.get_parameters()]
>>> kwargs = {"tapes": tapes, "execute_fn": execute_fn, "jpc": jpc}
>>> ExecuteTapes.apply(kwargs, *parameters)
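
Putting the pieces together, a minimal end-to-end sketch is shown below. The device setup and the TransformJacobianProducts helper are illustrative assumptions; exact module paths and signatures may vary between PennyLane versions.

import torch
import pennylane as qml

# Illustrative imports; locations may differ across PennyLane versions.
from pennylane.workflow.jacobian_products import TransformJacobianProducts
from pennylane.workflow.interfaces.torch import ExecuteTapes

dev = qml.device("default.qubit", wires=1)

# A tape whose trainable parameter is a torch tensor.
x = torch.tensor(0.543, requires_grad=True)
tape = qml.tape.QuantumScript([qml.RX(x, wires=0)], [qml.expval(qml.PauliZ(0))])
tapes = (tape,)

def execute_fn(tapes):
    # Plain batch execution on the device.
    return dev.execute(tapes)

# A JacobianProductCalculator built from the parameter-shift transform.
jpc = TransformJacobianProducts(execute_fn, qml.gradients.param_shift)

# Parameters must come directly from the tapes, in order.
parameters = [p for t in tapes for p in t.get_parameters()]
kwargs = {"tapes": tapes, "execute_fn": execute_fn, "jpc": jpc}
results = ExecuteTapes.apply(kwargs, *parameters)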

dirty_tensors
is_traceable = False
materialize_grads
metadata
needs_input_grad
next_functions
non_differentiable
requires_grad
saved_tensors
saved_variables
to_save


apply()
backward(*flat_grad_outputs)

Defines a formula for differentiating the operation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by as many outputs as forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor, or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs a gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs a gradient computed w.r.t. the output.

forward(out_struct_holder, *inp)

Performs the operation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).

The context can be used to store arbitrary data that can be then retrieved during the backward pass.
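
For illustration of this forward()/backward() contract in general (independent of ExecuteTapes), a toy Function in the modern staticmethod style might look like:

import torch

class Square(torch.autograd.Function):
    """Toy example: y = x ** 2."""

    @staticmethod
    def forward(ctx, x):
        # Stash anything backward() will need on the context.
        ctx.save_for_backward(x)
        return x ** 2

    @staticmethod
    def backward(ctx, grad_output):
        # Return one gradient per input to forward().
        (x,) = ctx.saved_tensors
        return grad_output * 2 * x

x = torch.tensor(3.0, requires_grad=True)
Square.apply(x).backward()
print(x.grad)  # tensor(6.)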

mark_dirty(*args)

Marks given tensors as modified in an in-place operation.

This should be called at most once, only from inside the forward() method, and all arguments should be inputs.

Every tensor that’s been modified in-place in a call to forward() should be given to this function, to ensure correctness of our checks. It doesn’t matter whether the function is called before or after modification.
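
A minimal sketch of the in-place pattern (illustrative only):

import torch

class AddOneInplace(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        x.add_(1.0)        # modify the input in place
        ctx.mark_dirty(x)  # declare the mutation to autograd
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # d(x + 1)/dx = 1

x = torch.ones(3, requires_grad=True)
# Clone first: in-place ops on leaf tensors that require grad are not allowed.
y = AddOneInplace.apply(x.clone())
y.sum().backward()
print(x.grad)  # tensor([1., 1., 1.])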

mark_non_differentiable(*args)

Marks outputs as non-differentiable.

This should be called at most once, only from inside the forward() method, and all arguments should be outputs.

This will mark outputs as not requiring gradients, increasing the efficiency of backward computation. You still need to accept a gradient for each output in backward(), but it’s always going to be a zero tensor with the same shape as the shape of a corresponding output.

This is used e.g. for indices returned from a max Function.
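
As a sketch of that pattern, a sort-like Function can mark its integer index output as non-differentiable:

import torch

class Sort(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        sorted_x, idx = torch.sort(x)
        ctx.mark_non_differentiable(idx)  # integer indices carry no gradient
        ctx.save_for_backward(idx)
        return sorted_x, idx

    @staticmethod
    def backward(ctx, grad_sorted, grad_idx):
        # grad_idx is the always-zero gradient described above; ignore it and
        # scatter the sorted-output gradient back to the original positions.
        (idx,) = ctx.saved_tensors
        grad_input = torch.zeros_like(grad_sorted)
        grad_input[idx] = grad_sorted
        return grad_input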

mark_shared_storage(*pairs)
name()
register_hook()
save_for_backward(*tensors)

Saves given tensors for a future call to backward().

This should be called at most once, and only from inside the forward() method.

Later, saved tensors can be accessed through the saved_tensors attribute. Before returning them to the user, a check is made to ensure they weren’t used in any in-place operation that modified their content.

Arguments can also be None.
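
For example, a product Function saves both inputs and reads them back in the same order (a hedged sketch, not part of this class):

import torch

class Mul(torch.autograd.Function):
    @staticmethod
    def forward(ctx, a, b):
        ctx.save_for_backward(a, b)  # retrieved later in this order
        return a * b

    @staticmethod
    def backward(ctx, grad_output):
        a, b = ctx.saved_tensors
        # One gradient per input to forward().
        return grad_output * b, grad_output * a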

set_materialize_grads(value)

Sets whether to materialize output grad tensors. Default is true.

This should be called only from inside the forward() method.

If true, undefined output grad tensors will be expanded to tensors full of zeros prior to calling the backward() method.
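
A small sketch of the effect: with materialization disabled, an output that never received a gradient reaches backward() as None rather than as zeros, which lets backward() skip the corresponding work.

import torch

class TwoOutputs(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.set_materialize_grads(False)
        return x * 2, x * 3

    @staticmethod
    def backward(ctx, g0, g1):
        # With materialization off, unused outputs yield g == None.
        grad = None
        if g0 is not None:
            grad = 2 * g0
        if g1 is not None:
            grad = 3 * g1 if grad is None else grad + 3 * g1
        return grad

x = torch.tensor(1.0, requires_grad=True)
a, _ = TwoOutputs.apply(x)
a.backward()       # only the first output participates, so g1 is None
print(x.grad)      # tensor(2.)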