qml.qnn.TorchLayer

class TorchLayer(qnode, weight_shapes, init_method=None)[source]

Bases: torch.nn.modules.module.Module

Converts a QNode to a Torch layer.

The result can be used within the torch.nn Sequential or Module classes for creating quantum and hybrid models.

Parameters
  • qnode (qml.QNode) – the PennyLane QNode to be converted into a Torch layer

  • weight_shapes (dict[str, tuple]) – a dictionary mapping from all weights used in the QNode to their corresponding shapes

  • init_method (Union[Callable, Dict[str, Union[Callable, torch.Tensor]], None]) – Either a torch.nn.init function for initializing all QNode weights or a dictionary specifying the callable/value used for each weight. If not specified, weights are randomly initialized using the uniform distribution over \([0, 2 \pi]\).

Example

First let’s define the QNode that we want to convert into a Torch layer:

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights_0, weight_1):
    qml.RX(inputs[0], wires=0)
    qml.RX(inputs[1], wires=1)
    qml.Rot(*weights_0, wires=0)
    qml.RY(weight_1, wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.Z(0)), qml.expval(qml.Z(1))

The signature of the QNode must contain an argument named inputs for the input data; all other arguments are treated as internal weights. We can then convert to a Torch layer with:

>>> weight_shapes = {"weights_0": 3, "weight_1": 1}
>>> qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)

The internal weights of the QNode are automatically initialized within the TorchLayer, with their shapes specified in the weight_shapes dictionary. The layer can then be combined with other neural network layers from the torch.nn module to create a hybrid model:

>>> clayer = torch.nn.Linear(2, 2)
>>> model = torch.nn.Sequential(qlayer, clayer)
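The hybrid model can then be evaluated like any other PyTorch module. A minimal sketch on a single two-dimensional sample (the input values are illustrative):

x = torch.rand(2)  # one sample with two features, fed to the RX gates
out = model(x)     # tensor of two values from the final Linear layer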

QNode signature

The QNode must have a signature that satisfies the following conditions:

  • Contain an argument named inputs for the input data.

  • All other arguments must accept an array or tensor and are treated as internal weights of the QNode.

  • All other arguments must have no default value.

  • The inputs argument is permitted to have a default value provided the gradient with respect to inputs is not required.

  • There cannot be a variable number of positional or keyword arguments, e.g., no *args or **kwargs present in the signature.
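For example, the following signature satisfies all of these conditions (a minimal sketch; the weights argument name is illustrative):

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.Z(0))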

Output shape

If the QNode returns a single measurement, then the output of the TorchLayer will have shape (batch_dim, *measurement_shape), where measurement_shape is the output shape of the measurement:

def print_output_shape(measurements):
    n_qubits = 2
    dev = qml.device("default.qubit", wires=n_qubits, shots=100)

    @qml.qnode(dev)
    def qnode(inputs, weights):
        qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
        qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
        if len(measurements) == 1:
            return qml.apply(measurements[0])
        return [qml.apply(m) for m in measurements]

    weight_shapes = {"weights": (3, n_qubits, 3)}
    qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)

    batch_dim = 5
    x = torch.zeros((batch_dim, n_qubits))
    return qlayer(x).shape

>>> print_output_shape([qml.expval(qml.Z(0))])
torch.Size([5])
>>> print_output_shape([qml.probs(wires=[0, 1])])
torch.Size([5, 4])
>>> print_output_shape([qml.sample(wires=[0, 1])])
torch.Size([5, 100, 2])

If the QNode returns multiple measurements, then the measurement results will be flattened and concatenated, resulting in an output of shape (batch_dim, total_flattened_dim):

>>> print_output_shape([qml.expval(qml.Z(0)), qml.probs(wires=[0, 1])])
torch.Size([5, 5])
>>> print_output_shape([qml.probs([0, 1]), qml.sample(wires=[0, 1])])
torch.Size([5, 204])
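If the individual measurement results are needed downstream, the concatenated output can be split back apart with torch.split(). A sketch for the expval and probs combination above, assuming a layer qlayer constructed as in print_output_shape:

out = qlayer(x)                                   # shape (5, 5)
expvals, probs = torch.split(out, [1, 4], dim=1)  # shapes (5, 1) and (5, 4)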

Initializing weights

If init_method is not specified, weights are randomly initialized from the uniform distribution on the interval \([0, 2 \pi]\).

Alternative a): The optional init_method argument of TorchLayer specifies how the QNode weights are initialized. The function passed must be from the torch.nn.init module. For example, weights can be randomly initialized from the normal distribution by passing:

init_method = torch.nn.init.normal_
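The function is then passed via the init_method argument when constructing the layer:

qlayer = qml.qnn.TorchLayer(qnode, weight_shapes, init_method=torch.nn.init.normal_)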

Alternative b): Two dictionaries, weight_shapes and init_method, are passed, with keys matching the weight arguments of the QNode.

@qml.qnode(dev)
def qnode(inputs, weights_0, weights_1, weights_2, weight_3, weight_4):
    qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.templates.StronglyEntanglingLayers(weights_0, wires=range(n_qubits))
    qml.templates.BasicEntanglerLayers(weights_1, wires=range(n_qubits))
    qml.Rot(*weights_2, wires=0)
    qml.RY(weight_3, wires=1)
    qml.RZ(weight_4, wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.Z(0)), qml.expval(qml.Z(1))


weight_shapes = {
    "weights_0": (3, n_qubits, 3),
    "weights_1": (3, n_qubits),
    "weights_2": 3,
    "weight_3": 1,
    "weight_4": (1,),
}

init_method = {
    "weights_0": torch.nn.init.normal_,
    "weights_1": torch.nn.init.uniform_,
    "weights_2": torch.tensor([1., 2., 3.]),
    "weight_3": torch.tensor(1.),  # scalar when shape is not an iterable and is <= 1
    "weight_4": torch.tensor([1.]),
}

qlayer = qml.qnn.TorchLayer(qnode, weight_shapes=weight_shapes, init_method=init_method)
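The resulting initialization can be inspected by iterating over the layer's registered parameters with the standard torch.nn.Module API:

for name, param in qlayer.named_parameters():
    print(name, param.shape)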

Model saving

Instances of TorchLayer can be saved using the usual torch.save() utility:

qlayer = qml.qnn.TorchLayer(qnode, weight_shapes=weight_shapes)
torch.save(qlayer.state_dict(), SAVE_PATH)

To load the layer again, an instance of the class must be created before calling torch.load(), as required by PyTorch:

qlayer = qml.qnn.TorchLayer(qnode, weight_shapes=weight_shapes)
qlayer.load_state_dict(torch.load(SAVE_PATH))
qlayer.eval()

Note

Currently, TorchLayer objects cannot be saved using the torch.save(qlayer, SAVE_PATH) syntax. To save a TorchLayer object, save the object's state_dict instead.

PyTorch modules that contain TorchLayer objects can also be saved and loaded.

Saving:

qlayer = qml.qnn.TorchLayer(qnode, weight_shapes=weight_shapes)
clayer = torch.nn.Linear(2, 2)
model = torch.nn.Sequential(qlayer, clayer)
torch.save(model.state_dict(), SAVE_PATH)

Loading:

qlayer = qml.qnn.TorchLayer(qnode, weight_shapes=weight_shapes)
clayer = torch.nn.Linear(2, 2)
model = torch.nn.Sequential(qlayer, clayer)
model.load_state_dict(torch.load(SAVE_PATH))
model.eval()

Full code example

The code block below shows how a circuit composed of templates from the Templates module can be combined with classical Linear layers to learn the two-dimensional moons dataset.

import numpy as np
import pennylane as qml
import torch
import sklearn.datasets

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.Z(0)), qml.expval(qml.Z(1))

weight_shapes = {"weights": (3, n_qubits, 3)}

qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)
clayer1 = torch.nn.Linear(2, 2)
clayer2 = torch.nn.Linear(2, 2)
softmax = torch.nn.Softmax(dim=1)
model = torch.nn.Sequential(clayer1, qlayer, clayer2, softmax)

samples = 100
x, y = sklearn.datasets.make_moons(samples)
y_hot = np.zeros((samples, 2))
y_hot[np.arange(samples), y] = 1

X = torch.tensor(x).float()
Y = torch.tensor(y_hot).float()

opt = torch.optim.SGD(model.parameters(), lr=0.5)
loss = torch.nn.L1Loss()

The model can be trained using:

epochs = 8
batch_size = 5
batches = samples // batch_size

data_loader = torch.utils.data.DataLoader(list(zip(X, Y)), batch_size=batch_size,
                                          shuffle=True, drop_last=True)

for epoch in range(epochs):

    running_loss = 0

    for x, y in data_loader:
        opt.zero_grad()

        loss_evaluated = loss(model(x), y)
        loss_evaluated.backward()

        opt.step()

        running_loss += loss_evaluated.item()  # accumulate as a plain float, detached from the graph

    avg_loss = running_loss / batches
    print("Average loss over epoch {}: {:.4f}".format(epoch + 1, avg_loss))

An example output is shown below:

Average loss over epoch 1: 0.5089
Average loss over epoch 2: 0.4765
Average loss over epoch 3: 0.2710
Average loss over epoch 4: 0.1865
Average loss over epoch 5: 0.1670
Average loss over epoch 6: 0.1635
Average loss over epoch 7: 0.1528
Average loss over epoch 8: 0.1528
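After training, the classification accuracy over the dataset can be computed with a sketch along the following lines, reusing the model, X, and y variables defined above:

with torch.no_grad():
    y_pred = model(X)
predictions = torch.argmax(y_pred, dim=1).numpy()
accuracy = np.mean(predictions == y)
print("Accuracy: {:.1f}%".format(accuracy * 100))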

T_destination

alias of TypeVar('T_destination', bound=Dict[str, Any])

dump_patches

input_arg

Name of the argument to be used as the input to the Torch layer. Set to "inputs".

training

add_module(name, module)

Adds a child module to the current module.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

construct(args, kwargs)

Constructs the wrapped QNode on input data using the initialized weights.

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Sets the module in evaluation mode.

extra_repr()

Sets the extra representation of the module.

float()

Casts all floating point parameters and buffers to float datatype.

forward(inputs)

Evaluates a forward pass through the QNode based upon input data and the initialized weights.

get_buffer(target)

Returns the buffer given by target if it exists, otherwise throws an error.

get_extra_state()

Returns any extra state to include in the module's state_dict.

get_parameter(target)

Returns the parameter given by target if it exists, otherwise throws an error.

get_submodule(target)

Returns the submodule given by target if it exists, otherwise throws an error.

half()

Casts all floating point parameters and buffers to half datatype.

ipu([device])

Moves all model parameters and buffers to the IPU.

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

modules()

Returns an iterator over all modules in the network.

named_buffers([prefix, recurse])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Returns an iterator over module parameters.

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_forward_hook(hook)

Registers a forward hook on the module.

register_forward_pre_hook(hook)

Registers a forward pre-hook on the module.

register_full_backward_hook(hook)

Registers a backward hook on the module.

register_load_state_dict_post_hook(hook)

Registers a post hook to be run after module's load_state_dict is called.

register_module(name, module)

Alias for add_module().

register_parameter(name, param)

Adds a parameter to the module.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

set_extra_state(state)

This function is called from load_state_dict() to handle any extra state found within the state_dict.

set_input_argument([input_name])

Set the name of the input argument.

share_memory()

See torch.Tensor.share_memory_()

state_dict(*args[, destination, prefix, ...])

Returns a dictionary containing references to the whole state of the module.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

to_empty(*, device)

Moves the parameters and buffers to the specified device without copying storage.

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

xpu([device])

Moves all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Sets gradients of all model parameters to zero.

construct(args, kwargs)[source]

Constructs the wrapped QNode on input data using the initialized weights.

This method was added to match the QNode interface. The provided args must contain a single entry, which is the input to the layer. The provided kwargs are unused.

Parameters
  • args (tuple) – A tuple containing one entry that is the input to this layer

  • kwargs (dict) – Unused

forward(inputs)[source]

Evaluates a forward pass through the QNode based upon input data and the initialized weights.

Parameters

inputs (tensor) – data to be processed

Returns

output data

Return type

tensor

static set_input_argument(input_name='inputs')[source]

Set the name of the input argument.

Parameters

input_name (str) – Name of the input argument
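Since set_input_argument() is a static method, calling it changes the expected input name for the TorchLayer class as a whole. For example, to accommodate a QNode whose data argument is named features (an illustrative name):

qml.qnn.TorchLayer.set_input_argument("features")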