qml.interfaces.torch.ExecuteTapes

class ExecuteTapes(*args, **kwargs)[source]

Bases: torch.autograd.function.Function
The signature of this torch.autograd.Function is designed to work around Torch restrictions. In particular, torch.autograd.Function cannot accept keyword arguments, so we pass a dictionary as the first argument kwargs. This dictionary must contain:

- "tapes": the quantum tapes to batch evaluate
- "device": the quantum device to use to evaluate the tapes
- "execute_fn": the execution function to use on forward passes
- "gradient_fn": the gradient transform function to use for backward passes
- "gradient_kwargs": gradient keyword arguments to pass to the gradient function
- "max_diff": the maximum order of derivatives to support
Further, note that the parameters argument is dependent on the tapes; this function should always be called with the parameters extracted directly from the tapes as follows:

>>> parameters = []
>>> [parameters.extend(t.get_parameters()) for t in tapes]
>>> kwargs = {"tapes": tapes, "device": device, "gradient_fn": gradient_fn, ...}
>>> ExecuteTapes.apply(kwargs, *parameters)
The private argument _n is used to track the nesting of derivatives, for example if the nth-order derivative is requested. Do not set this argument unless you understand the consequences!
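For orientation, the snippet below is a minimal sketch of assembling the dictionary described above and calling the function. The device dev, the tape list tapes, and the specific choices of dev.batch_execute and qml.gradients.param_shift are illustrative assumptions, not requirements of this class.

>>> # Illustrative sketch only -- assumes ``dev`` (a PennyLane device) and
>>> # ``tapes`` (a list of quantum tapes) already exist; the execution and
>>> # gradient functions shown are placeholders chosen for the example.
>>> import pennylane as qml
>>> kwargs = {
...     "tapes": tapes,
...     "device": dev,
...     "execute_fn": dev.batch_execute,           # forward-pass executor (assumed)
...     "gradient_fn": qml.gradients.param_shift,  # gradient transform (assumed)
...     "gradient_kwargs": {},
...     "max_diff": 1,
... }
>>> parameters = []
>>> [parameters.extend(t.get_parameters()) for t in tapes]
>>> results = ExecuteTapes.apply(kwargs, *parameters)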
Attributes

- dirty_tensors
- is_traceable = False
- materialize_grads
- metadata
- needs_input_grad
- next_functions
- non_differentiable
- requires_grad
- saved_tensors
- saved_variables
- to_save
Methods

- apply()
- backward(*flat_grad_outputs): Defines a formula for differentiating the operation.
- forward(out_struct_holder, *inp): Performs the operation.
- mark_dirty(*args): Marks given tensors as modified in an in-place operation.
- mark_non_differentiable(*args): Marks outputs as non-differentiable.
- mark_shared_storage(*pairs)
- save_for_backward(*tensors): Saves given tensors for a future call to backward().
- set_materialize_grads(value): Sets whether to materialize output grad tensors.
apply()
backward(*flat_grad_outputs)[source]

Defines a formula for differentiating the operation.

This function is to be overridden by all subclasses. It must accept a context ctx as the first argument, followed by as many outputs as forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad, a tuple of booleans representing whether each input needs a gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs a gradient computed w.r.t. the output.
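As a generic illustration of this contract (not specific to ExecuteTapes), a minimal custom Function might use ctx.needs_input_grad as follows:

>>> import torch
>>> class Scale(torch.autograd.Function):
...     """Minimal illustrative Function computing y = 3 * x."""
...     @staticmethod
...     def forward(ctx, x):
...         return 3 * x
...     @staticmethod
...     def backward(ctx, grad_output):
...         # ctx.needs_input_grad[0] is True only if the first forward() input
...         # requires a gradient; returning None skips the computation otherwise.
...         if ctx.needs_input_grad[0]:
...             return 3 * grad_output
...         return None
>>> x = torch.tensor([1.0, 2.0], requires_grad=True)
>>> Scale.apply(x).sum().backward()
>>> x.grad
tensor([3., 3.])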
forward(out_struct_holder, *inp)[source]

Performs the operation.

This function is to be overridden by all subclasses. It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).

The context can be used to store arbitrary data that can then be retrieved during the backward pass.
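For instance, a generic Function (unrelated to ExecuteTapes) can stash non-tensor data on ctx in forward() and read it back in backward():

>>> import torch
>>> class AddConstant(torch.autograd.Function):
...     @staticmethod
...     def forward(ctx, x, constant):
...         # Arbitrary (non-tensor) data can be stored directly on the context.
...         ctx.constant = constant
...         return x + constant
...     @staticmethod
...     def backward(ctx, grad_output):
...         # One return value per forward() input; the Python constant gets None.
...         return grad_output, None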
mark_dirty(*args)

Marks given tensors as modified in an in-place operation.

This should be called at most once, only from inside the forward() method, and all arguments should be inputs.

Every tensor that’s been modified in-place in a call to forward() should be given to this function, to ensure correctness of our checks. It doesn’t matter whether the function is called before or after modification.
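A generic sketch of the intended usage (not PennyLane code): a Function that mutates its input in place and reports this via mark_dirty().

>>> import torch
>>> class AddOneInplace(torch.autograd.Function):
...     @staticmethod
...     def forward(ctx, x):
...         x.add_(1.0)        # modify the input tensor in place
...         ctx.mark_dirty(x)  # tell autograd that ``x`` was mutated
...         return x
...     @staticmethod
...     def backward(ctx, grad_output):
...         return grad_output  # d(x + 1)/dx = 1
>>> # A leaf tensor that requires grad cannot be modified in place, so clone first.
>>> x = torch.zeros(3, requires_grad=True)
>>> y = AddOneInplace.apply(x.clone())
>>> y.sum().backward()
>>> x.grad
tensor([1., 1., 1.])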
mark_non_differentiable(*args)

Marks outputs as non-differentiable.

This should be called at most once, only from inside the forward() method, and all arguments should be outputs.

This will mark outputs as not requiring gradients, increasing the efficiency of backward computation. You still need to accept a gradient for each output in backward(), but it’s always going to be a zero tensor with the same shape as the corresponding output.

This is used e.g. for indices returned from a max Function.
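As a generic illustration (not PennyLane code): a sort-like Function whose returned indices are marked non-differentiable.

>>> import torch
>>> class Sort(torch.autograd.Function):
...     @staticmethod
...     def forward(ctx, x):
...         values, idx = x.sort()
...         ctx.mark_non_differentiable(idx)  # indices carry no gradient
...         ctx.save_for_backward(idx)
...         return values, idx
...     @staticmethod
...     def backward(ctx, grad_values, grad_idx):
...         # ``grad_idx`` is always a zero tensor; scatter the value gradients
...         # back to their original positions.
...         (idx,) = ctx.saved_tensors
...         grad_input = torch.zeros_like(grad_values)
...         grad_input.index_add_(0, idx, grad_values)
...         return grad_input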
name()

register_hook()
save_for_backward(*tensors)

Saves given tensors for a future call to backward().

This should be called at most once, and only from inside the forward() method.

Later, saved tensors can be accessed through the saved_tensors attribute. Before returning them to the user, a check is made to ensure they weren’t used in any in-place operation that modified their content.

Arguments can also be None.
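A generic example (not part of this class) of saving tensors in forward() and retrieving them in backward():

>>> import torch
>>> class Mul(torch.autograd.Function):
...     @staticmethod
...     def forward(ctx, x, y):
...         ctx.save_for_backward(x, y)  # tensors needed by backward()
...         return x * y
...     @staticmethod
...     def backward(ctx, grad_output):
...         x, y = ctx.saved_tensors    # retrieved in the order they were saved
...         return grad_output * y, grad_output * x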
set_materialize_grads(value)

Sets whether to materialize output grad tensors. Default is true.

This should be called only from inside the forward() method.

If true, undefined output grad tensors will be expanded to tensors full of zeros prior to calling the backward() method.
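A generic sketch of the effect (not PennyLane code): with materialization disabled, unused outputs pass None to backward() rather than zero tensors, so backward() must handle that case explicitly.

>>> import torch
>>> class TwoOutputs(torch.autograd.Function):
...     @staticmethod
...     def forward(ctx, x):
...         ctx.set_materialize_grads(False)
...         return x.clone(), x.clone()
...     @staticmethod
...     def backward(ctx, g1, g2):
...         # With materialization disabled, g1 or g2 is None (not a zero tensor)
...         # when the corresponding output was never used in the loss.
...         if g1 is None:
...             return g2
...         if g2 is None:
...             return g1
...         return g1 + g2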