qml.math.requires_grad

requires_grad(tensor, interface=None)

Returns True if the tensor is considered trainable.
Warning
The implementation depends on the contained tensor type and may be context-dependent.
For example, Torch tensors and PennyLane tensors track trainability as a property of the tensor itself. TensorFlow, on the other hand, only tracks trainability while the tensor is being watched by a gradient tape.
- Parameters
  - tensor (tensor_like) – input tensor
  - interface (str) – The name of the interface. Will be determined automatically if not provided; it may also be supplied explicitly (see the sketch after the first example below).
Example
Calling this function on a PennyLane NumPy array:
>>> x = np.array([1., 5.], requires_grad=True)
>>> requires_grad(x)
True
>>> x.requires_grad = False
>>> requires_grad(x)
False
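The interface can also be supplied explicitly to skip automatic dispatch. A minimal sketch, assuming the PennyLane NumPy array above dispatches to the "autograd" interface:

>>> x = np.array([1., 5.], requires_grad=True)
>>> requires_grad(x, interface="autograd")
True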
PyTorch has similar behaviour.
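For instance, an illustrative sketch assuming torch has been imported, where trainability is read from the tensor's own requires_grad attribute:

>>> x = torch.tensor([1., 5.], requires_grad=True)
>>> requires_grad(x)
True
>>> x.requires_grad = False
>>> requires_grad(x)
False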
With TensorFlow, the output depends on whether the tensor is currently being watched by a gradient tape:
>>> x = tf.Variable([0.6, 0.1])
>>> requires_grad(x)
False
>>> with tf.GradientTape() as tape:
...     print(requires_grad(x))
True
While TensorFlow constants are not trainable by default, they can be manually watched by the gradient tape:
>>> x = tf.constant([0.6, 0.1])
>>> with tf.GradientTape() as tape:
...     print(requires_grad(x))
False
>>> with tf.GradientTape() as tape:
...     tape.watch([x])
...     print(requires_grad(x))
True