qml.gradients.compute_jvp_single
- compute_jvp_single(tangent, jac)
Convenience function to compute the Jacobian vector product for a given tangent vector and a Jacobian for a single measurement tape.
- Parameters:
tangent (list, tensor_like) – tangent vector
jac (tensor_like, tuple) – Jacobian matrix
- Returns:
the Jacobian vector product
- Return type:
tensor_like
Examples
We start with a number of examples. A more complete, technical description is given further below.
For a single parameter and a single measurement without shape (e.g. expval, var):
>>> tangent = np.array([1.0])
>>> jac = np.array(0.2)
>>> qml.gradients.compute_jvp_single(tangent, jac)
array(0.2)
For a single parameter and a single measurement with shape (e.g. probs):
>>> tangent = np.array([2.0])
>>> jac = np.array([0.3, 0.4])
>>> qml.gradients.compute_jvp_single(tangent, jac)
array([0.6, 0.8])
For multiple parameters (in this case 2 parameters) and a single measurement without shape (e.g. expval, var):
>>> tangent = np.array([1.0, 2.0])
>>> jac = tuple([np.array(0.1), np.array(0.2)])
>>> qml.gradients.compute_jvp_single(tangent, jac)
array(0.5)
For multiple parameters (in this case 2 parameters) and a single measurement with shape (e.g. probs):
>>> tangent = np.array([1.0, 0.5])
>>> jac = tuple([np.array([0.1, 0.3]), np.array([0.2, 0.4])])
>>> qml.gradients.compute_jvp_single(tangent, jac)
array([0.2, 0.5])
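Each of these results is simply the linear combination of the per-parameter Jacobians weighted by the tangent entries. A quick NumPy check of the last example (illustrative only, not how the function is implemented internally):

```python
import numpy as np

tangent = np.array([1.0, 0.5])
jac = (np.array([0.1, 0.3]), np.array([0.2, 0.4]))

# JVP = sum_i tangent_i * jac_i
jvp = sum(t * j for t, j in zip(tangent, jac))
print(jvp)  # [0.2 0.5]
```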
Technical description
There are multiple case distinctions in this function; for particular examples, see above.
- The JVP may be for one (A) or multiple (B) parameters. We call the number of parameters k.
- The number R of tape return type dimensions may be between 0 and 3. We call the return type dimensions r_j.
- Each parameter may have an arbitrary number L_i >= 0 of dimensions.
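To make these counts concrete, here is a minimal NumPy illustration (the return shapes are the usual ones for expval and probs; the parameter shapes are hypothetical):

```python
import numpy as np

# expval/var return a scalar, so R = 0 and there are no r_j.
scalar_return = np.array(0.2)
# probs on two wires returns shape (4,), so R = 1 and r_1 = 4.
probs_return = np.array([0.5, 0.1, 0.3, 0.1])

# A scalar gate angle is a parameter with L_i = 0 dimensions;
# a tensor-valued parameter of shape (2, 3) has L_i = 2.
angle = np.array(0.7)
tensor_param = np.zeros((2, 3))
print(scalar_return.ndim, probs_return.ndim, angle.ndim, tensor_param.ndim)
# 0 1 0 2
```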
In the following, (a, b) denotes a tensor_like of shape (a, b), and [(a,), (b,)] / ((a,), (b,)) denotes a list/tuple of tensors with the indicated shapes, respectively. Ignore the case of no trainable parameters, as it is filtered out in advance.
For scenario (A), the input shapes can be in
| tangent shape | jac shape | Comment |
|---|---|---|
| (1,) or [()] or (()) | () | scalar return, scalar parameter |
| (1,) or [()] or (()) | (r_1,..,r_R) | tensor return, scalar parameter |
| [(l_1,..,l_{L_1})] [1] | (l_1,..,l_{L_1}) | scalar return, tensor parameter |
| [(l_1,..,l_{L_1})] [1] | (r_1,..,r_R, l_1,..,l_{L_1}) | tensor return, tensor parameter |
[1] Note that, intuitively, tangent could be allowed to be a tensor of shape (l_1,..,l_{L_1}) without an outer list. However, this is excluded in order to allow for the distinction from scenario (B). Internally, this input shape for tangent never occurs for scenario (A).
In this scenario, the tangent is reshaped into a one-dimensional tensor with shape (tangent_size,) and the Jacobian is reshaped to have the dimensions (r_1,..,r_R, tangent_size). This is followed by a tensordot contraction over the tangent_size axis of both tensors.
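As a rough NumPy sketch of this reshape-and-contract step (illustrative only, not PennyLane's implementation; the helper name and the explicit return_ndim argument, playing the role of R, are assumptions for the example):

```python
import numpy as np

def jvp_single_param_sketch(tangent, jac, return_ndim):
    # Illustrative sketch of scenario (A): a single (possibly tensor-valued)
    # parameter. Flatten the tangent into shape (tangent_size,).
    flat_tangent = np.reshape(tangent, -1)
    # Reshape the Jacobian to (r_1,..,r_R, tangent_size).
    return_shape = np.shape(jac)[:return_ndim]
    jac = np.reshape(jac, return_shape + (flat_tangent.size,))
    # Contract over the tangent_size axis of both tensors.
    return np.tensordot(jac, flat_tangent, axes=[[-1], [0]])

# Tensor return (R = 1, r_1 = 2) and a tensor parameter with two entries:
tangent = [np.array([1.0, 0.5])]
jac = np.array([[0.1, 0.2], [0.3, 0.4]])  # shape (r_1, l_1) = (2, 2)
print(jvp_single_param_sketch(tangent, jac, return_ndim=1))  # [0.2 0.5]
```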
For scenario (B), the input shapes can be in
| tangent shape | jac shape | Comment |
|---|---|---|
| (k,) or [(),..,()] or ((),..,()) | ((),..,()) (length k) | scalar return, k scalar parameters |
| (k,) or [(),..,()] or ((),..,()) | ((r_1,..,r_R),..,(r_1,..,r_R)) [1] | tensor return, k scalar parameters |
| [(l_1,..,l_{L_1}),..,(l_1,..,l_{L_k})] | ((l_1,..,l_{L_1}),..,(l_1,..,l_{L_k})) | scalar return, k tensor parameters |
| [(l_1,..,l_{L_1}),..,(l_1,..,l_{L_k})] | ((r_1,..,r_R, l_1,..,l_{L_1}),..,(r_1,..,r_R, l_1,..,l_{L_k})) [1] | tensor return, k tensor parameters |
[1] Note that the return type dimensions (r_1,..,r_R) are the same for all entries of jac, whereas the dimensions of the entries in tangent, and the corresponding dimensions (l_1,..,l_{L_k}) of the jac entries, may differ.
In this scenario, another case separation is used: if any of the parameters is a tensor (i.e. not a scalar), all tangent entries are reshaped into one-dimensional tensors with shapes (tangent_size_i,) and then stacked into one one-dimensional tensor. If there are no tensor parameters, the tangent is just stacked and reshaped. The Jacobians are reshaped to have the dimensions (r_1,..,r_R, tangent_size_i) and then are concatenated along their last (potentially mismatching) axis. This is followed by a tensordot contraction over the axes of size \(\sum_i\) tangent_size_i.
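Again as a rough NumPy sketch (illustrative only; the helper name and the explicit return_ndim argument are assumptions for the example):

```python
import numpy as np

def jvp_multi_param_sketch(tangent, jac, return_ndim):
    # Illustrative sketch of scenario (B): k parameters, one measurement.
    # Flatten each tangent entry to (tangent_size_i,) and stack them into
    # a single one-dimensional tensor.
    flat_tangent = np.concatenate([np.reshape(t, -1) for t in tangent])
    # Reshape each Jacobian entry to (r_1,..,r_R, tangent_size_i) and
    # concatenate along the last (potentially mismatching) axis.
    flat_jacs = []
    for t, j in zip(tangent, jac):
        return_shape = np.shape(j)[:return_ndim]
        flat_jacs.append(np.reshape(j, return_shape + (np.size(t),)))
    full_jac = np.concatenate(flat_jacs, axis=-1)
    # Contract over the combined axis of size sum_i tangent_size_i.
    return np.tensordot(full_jac, flat_tangent, axes=[[-1], [0]])

# Tensor return (R = 1, r_1 = 2) with one scalar and one tensor parameter:
tangent = [np.array(1.0), np.array([0.5, 2.0])]
jac = (np.array([0.1, 0.3]), np.array([[0.2, 0.0], [0.4, 0.1]]))
print(jvp_multi_param_sketch(tangent, jac, return_ndim=1))  # [0.2 0.7]
```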