qml.qnn.KerasLayer¶
- class KerasLayer(*args, **kwargs)[source]¶
Bases: keras.src.layers.layer.Layer
Converts a QNode to a Keras Layer. The result can be used within the Keras Sequential or Model classes for creating quantum and hybrid models.
Warning
This class is deprecated because Keras 2 is no longer actively maintained. Please consider using torch instead of TensorFlow/Keras 2.
Note
KerasLayer currently only supports Keras 2. If you are running the newest version of TensorFlow and Keras, you may automatically be using Keras 3. For instructions on running with Keras 2 instead, see the documentation on backwards compatibility.
- Parameters
qnode (qml.QNode) – the PennyLane QNode to be converted into a Keras Layer
weight_shapes (dict[str, tuple]) – a dictionary mapping from all weights used in the QNode to their corresponding shapes
output_dim (int) – the output dimension of the QNode
weight_specs (dict[str, dict]) – An optional dictionary for users to provide additional specifications for weights used in the QNode, such as the method of parameter initialization. This specification is provided as a dictionary with keys given by the arguments of the add_weight() method and values being the corresponding specification.
**kwargs – additional keyword arguments passed to the Layer base class
Example
First let’s define the QNode that we want to convert into a Keras Layer:
    n_qubits = 2
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def qnode(inputs, weights_0, weight_1):
        qml.RX(inputs[0], wires=0)
        qml.RX(inputs[1], wires=1)
        qml.Rot(*weights_0, wires=0)
        qml.RY(weight_1, wires=1)
        qml.CNOT(wires=[0, 1])
        return qml.expval(qml.Z(0)), qml.expval(qml.Z(1))
The signature of the QNode must contain an inputs named argument for input data, with all other arguments to be treated as internal weights. We can then convert to a Keras Layer with:

    >>> weight_shapes = {"weights_0": 3, "weight_1": 1}
    >>> qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=2)
The internal weights of the QNode are automatically initialized within the KerasLayer and must have their shapes specified in a weight_shapes dictionary. It is then easy to combine with other neural network layers from the tensorflow.keras.layers module and create a hybrid:

    >>> clayer = tf.keras.layers.Dense(2)
    >>> model = tf.keras.models.Sequential([qlayer, clayer])
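As a quick sanity check, the hybrid model can be called directly on a batch of inputs. The sketch below is illustrative only and assumes two-dimensional input features matching the two-qubit QNode above:

    # A minimal sketch (assumed input shape): pass a batch through the hybrid model.
    import tensorflow as tf

    x = tf.random.uniform((5, 2))  # 5 samples, 2 features each (one feature per qubit)
    out = model(x)                 # the quantum layer feeds its 2 expectation values into Dense(2)
    print(out.shape)               # (5, 2)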
Usage Details
QNode signature
The QNode must have a signature that satisfies the following conditions:
- Contain an inputs named argument for input data.
- All other arguments must accept an array or tensor and are treated as internal weights of the QNode.
- All other arguments must have no default value.
- The inputs argument is permitted to have a default value provided the gradient with respect to inputs is not required.
- There cannot be a variable number of positional or keyword arguments, e.g., no *args or **kwargs present in the signature.
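For illustration, the sketch below contrasts signatures that satisfy these conditions with ones that do not. The function names and bodies are hypothetical placeholders:

    # Hypothetical signatures, shown only to illustrate the conditions above.

    def valid_qnode(inputs, weights, bias):      # OK: "inputs" plus weight arguments without defaults
        ...

    def also_valid(weights, inputs=None):        # OK: inputs may have a default if its gradient is not required
        ...

    def invalid_qnode(data, weights):            # not OK: no "inputs" named argument
        ...

    def also_invalid(inputs, *args, **kwargs):   # not OK: variable positional/keyword arguments
        ...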
Output shape
If the QNode returns a single measurement, then the output of the KerasLayer will have shape (batch_dim, *measurement_shape), where measurement_shape is the output shape of the measurement:

    def print_output_shape(measurements):
        n_qubits = 2
        dev = qml.device("default.qubit", wires=n_qubits, shots=100)

        @qml.qnode(dev)
        def qnode(inputs, weights):
            qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
            qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
            if len(measurements) == 1:
                return qml.apply(measurements[0])
            return [qml.apply(m) for m in measurements]

        weight_shapes = {"weights": (3, n_qubits, 3)}
        qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=None)

        batch_dim = 5
        x = tf.zeros((batch_dim, n_qubits))
        return qlayer(x).shape
    >>> print_output_shape([qml.expval(qml.Z(0))])
    TensorShape([5])
    >>> print_output_shape([qml.probs(wires=[0, 1])])
    TensorShape([5, 4])
    >>> print_output_shape([qml.sample(wires=[0, 1])])
    TensorShape([5, 100, 2])
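These shapes follow directly from the measurements: an expectation value is a scalar per sample, the probabilities over two wires have 2² = 4 entries, and sampling two wires with the device's 100 shots gives a per-sample shape of (100, 2).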
If the QNode returns multiple measurements, then the measurement results will be flattened and concatenated, resulting in an output of shape (batch_dim, total_flattened_dim):

    >>> print_output_shape([qml.expval(qml.Z(0)), qml.probs(wires=[0, 1])])
    TensorShape([5, 5])
    >>> print_output_shape([qml.probs([0, 1]), qml.sample(wires=[0, 1])])
    TensorShape([5, 204])
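Here total_flattened_dim is the sum of the flattened sizes of the individual measurements: 1 (expectation value) + 4 (two-qubit probabilities) = 5 in the first case, and 4 + 100 × 2 = 204 (probabilities plus samples over 100 shots on two wires) in the second.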
Initializing weights
The optional weight_specs argument of KerasLayer allows for a more fine-grained specification of the QNode weights, such as the method of initialization and any regularization or constraints. For example, the initialization method of the weights argument in the example above could be specified by:

    weight_specs = {"weights": {"initializer": "random_uniform"}}
The values of weight_specs are dictionaries with keys given by arguments of the Keras add_weight() method. For the "initializer" argument, one can specify a string such as "random_uniform" or an instance of an Initializer class, such as tf.keras.initializers.RandomUniform.

If weight_specs is not specified, weights will be added using the Keras default initialization and without any regularization or constraints.
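As an illustrative sketch, a specification combining an Initializer instance with a regularizer might look like the following; the particular keyword names are assumptions based on the standard Keras add_weight() signature:

    # A hedged sketch: any keyword accepted by add_weight() can appear in the inner dictionary.
    weight_specs = {
        "weights": {
            "initializer": tf.keras.initializers.RandomUniform(minval=0.0, maxval=6.28),
            "regularizer": tf.keras.regularizers.L2(1e-4),
        }
    }
    qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=2, weight_specs=weight_specs)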
Model saving

The weights of models that contain KerasLayers can be saved using the usual tf.keras.Model.save_weights method:

    clayer = tf.keras.layers.Dense(2, input_shape=(2,))
    qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=2)
    model = tf.keras.Sequential([clayer, qlayer])
    model.save_weights(SAVE_PATH)
To load the model weights, first instantiate the model like before, then call tf.keras.Model.load_weights:

    clayer = tf.keras.layers.Dense(2, input_shape=(2,))
    qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=2)
    model = tf.keras.Sequential([clayer, qlayer])
    model.load_weights(SAVE_PATH)
Models containing KerasLayer objects can also be saved directly using tf.keras.Model.save. This method also saves the model architecture, weights, and training configuration, including the optimizer state:

    clayer = tf.keras.layers.Dense(2, input_shape=(2,))
    qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=2)
    model = tf.keras.Sequential([clayer, qlayer])
    model.save(SAVE_PATH)
In this case, loading the model requires no knowledge of the original source code:
    model = tf.keras.models.load_model(SAVE_PATH)
Note
Currently KerasLayer objects cannot be saved in the HDF5 file format. In order to save a model using the latter method above, the SavedModel file format (default in TensorFlow 2.x) should be used.
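In practice this means choosing a save path that selects the SavedModel format. A brief sketch, following standard Keras conventions (the path names are placeholders):

    # In TensorFlow 2.x, a plain directory path saves in the SavedModel format (the default),
    # whereas a path ending in ".h5" would request the unsupported HDF5 format.
    model.save("my_hybrid_model")        # OK: SavedModel directory
    # model.save("my_hybrid_model.h5")   # avoid: HDF5 is not supported for KerasLayer objects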
Additional example

The code block below shows how a circuit composed of templates from the Templates module can be combined with classical Dense layers to learn the two-dimensional moons dataset.
    import pennylane as qml
    import tensorflow as tf
    import sklearn.datasets

    n_qubits = 2
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def qnode(inputs, weights):
        qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
        qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
        return qml.expval(qml.Z(0)), qml.expval(qml.Z(1))

    weight_shapes = {"weights": (3, n_qubits, 3)}

    qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=2)
    clayer1 = tf.keras.layers.Dense(2)
    clayer2 = tf.keras.layers.Dense(2, activation="softmax")
    model = tf.keras.models.Sequential([clayer1, qlayer, clayer2])

    data = sklearn.datasets.make_moons()
    X = tf.constant(data[0])
    Y = tf.one_hot(data[1], depth=2)

    opt = tf.keras.optimizers.SGD(learning_rate=0.5)
    model.compile(opt, loss='mae')
The model can be trained using:
    >>> model.fit(X, Y, epochs=8, batch_size=5)
    Train on 100 samples
    Epoch 1/8
    100/100 [==============================] - 9s 90ms/sample - loss: 0.3524
    Epoch 2/8
    100/100 [==============================] - 9s 87ms/sample - loss: 0.2441
    Epoch 3/8
    100/100 [==============================] - 9s 87ms/sample - loss: 0.1908
    Epoch 4/8
    100/100 [==============================] - 9s 87ms/sample - loss: 0.1832
    Epoch 5/8
    100/100 [==============================] - 9s 88ms/sample - loss: 0.1596
    Epoch 6/8
    100/100 [==============================] - 9s 87ms/sample - loss: 0.1637
    Epoch 7/8
    100/100 [==============================] - 9s 86ms/sample - loss: 0.1613
    Epoch 8/8
    100/100 [==============================] - 9s 87ms/sample - loss: 0.1474
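After training, predictions can be inspected with the standard Keras API. A brief sketch (no particular accuracy is implied):

    # Class probabilities from the softmax output layer; argmax recovers the predicted labels.
    probs = model.predict(X)
    predicted_labels = tf.argmax(probs, axis=1)
    true_labels = tf.argmax(Y, axis=1)
    accuracy = tf.reduce_mean(tf.cast(predicted_labels == true_labels, tf.float32))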
Returning a state
If your QNode returns the state of the quantum circuit using state() or density_matrix(), you must immediately follow your quantum Keras Layer with a layer that casts to reals. For example, you could use tf.keras.layers.Lambda with the function lambda x: tf.abs(x). This casting is required because TensorFlow's Keras layers require a real input and are differentiated with respect to real parameters.
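A minimal sketch of this pattern, reusing the analytic device, weight shapes, and imports from the additional example above (the surrounding layer sizes are illustrative):

    # The Lambda layer takes the absolute value of the complex amplitudes so that
    # subsequent Keras layers receive real-valued inputs.
    @qml.qnode(dev)
    def state_qnode(inputs, weights):
        qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
        qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
        return qml.state()

    state_layer = qml.qnn.KerasLayer(state_qnode, weight_shapes, output_dim=2**n_qubits)
    cast_layer = tf.keras.layers.Lambda(lambda x: tf.abs(x))
    model = tf.keras.models.Sequential([state_layer, cast_layer, tf.keras.layers.Dense(2)])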
Attributes

compute_dtype
The dtype of the computations performed by the layer.
dtype
Alias of layer.variable_dtype.
dtype_policy
input
Retrieves the input tensor(s) of a symbolic operation.
input_arg
Name of the argument to be used as the input to the Keras Layer.
input_dtype
The dtype layer inputs should be converted to.
input_spec
losses
List of scalar losses from add_loss, regularizers and sublayers.
metrics
List of all metrics.
metrics_variables
List of all metric variables.
non_trainable_variables
List of all non-trainable layer state.
non_trainable_weights
List of all non-trainable weight variables of the layer.
output
Retrieves the output tensor(s) of a layer.
path
The path of the layer.
quantization_mode
The quantization mode of this layer, None if not quantized.
supports_masking
Whether this layer supports computing a mask using compute_mask.
trainable
Settable boolean, whether this layer should be trainable or not.
trainable_variables
List of all trainable layer state.
trainable_weights
List of all trainable weight variables of the layer.
variable_dtype
The dtype of the state (weights) of the layer.
variables
List of all layer state, including random seeds.
weights
List of all weight variables of the layer.
Methods
add_loss(loss)
Can be called inside of the call() method to add a scalar loss.

add_metric(*args, **kwargs)

add_variable(shape, initializer[, dtype, ...])
Add a weight variable to the layer.

add_weight([shape, initializer, dtype, ...])
Add a weight variable to the layer.

build(input_shape)
Initializes the QNode weights.

build_from_config(config)
Builds the layer's states with the supplied config dict.

call(inputs)
Evaluates the QNode on input data using the initialized weights.

compute_mask(inputs, previous_mask)

compute_output_shape(input_shape)
Computes the output shape after passing data of shape input_shape through the QNode.

compute_output_spec(*args, **kwargs)

construct(args, kwargs)
Constructs the wrapped QNode on input data using the initialized weights.

count_params()
Count the total number of scalars composing the weights.

from_config(config)
Creates an operation from its config.

get_build_config()
Returns a dictionary with the layer's input shape.
get_config()
Get serialized layer configuration.

get_weights()
Return the values of layer.weights as a list of NumPy arrays.

load_own_variables(store)
Loads the state of the layer.

quantize(mode[, type_check])

quantized_build(input_shape, mode)

quantized_call(*args, **kwargs)

rematerialized_call(layer_call, *args, **kwargs)
Enable rematerialization dynamically for layer's call method.

save_own_variables(store)
Saves the state of the layer.

set_input_argument([input_name])
Set the name of the input argument.

set_weights(weights)
Sets the values of layer.weights from a list of NumPy arrays.

stateless_call(trainable_variables, ...[, ...])
Call the layer without any side effects.

symbolic_call(*args, **kwargs)

- build(input_shape)[source]¶
Initializes the QNode weights.
- Parameters
input_shape (tuple or tf.TensorShape) – shape of input data; this is unused since the weight shapes are already known in the __init__ method.
- compute_output_shape(input_shape)[source]¶
Computes the output shape after passing data of shape input_shape through the QNode.
- Parameters
input_shape (tuple or tf.TensorShape) – shape of input data
- Returns
shape of output data
- Return type
tf.TensorShape
- construct(args, kwargs)[source]¶
Constructs the wrapped QNode on input data using the initialized weights.
This method was added to match the QNode interface. The provided args must contain a single item, which is the input to the layer. The provided kwargs is unused.
- Parameters
args (tuple) – A tuple containing one entry that is the input to this layer
kwargs (dict) – Unused