qml.kernels

Overview

This subpackage defines functions that relate to quantum kernel methods. On the one hand, it includes functions to evaluate a quantum kernel systematically on training and test datasets in order to obtain the kernel matrix. On the other hand, it provides postprocessing methods for those kernel matrices, which can be used to mitigate device noise and sampling errors.
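
For concreteness, a quantum kernel of the kind these functions expect can be sketched as follows. The embedding, device, number of wires, and the names overlap_circuit and kernel are illustrative assumptions rather than requirements; any callable mapping two datapoints to a kernel value can be used.

import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def overlap_circuit(x1, x2):
    # Embed x1 and apply the inverse embedding of x2; the probability of the
    # all-zeros outcome is the squared overlap of the two feature states.
    qml.AngleEmbedding(x1, wires=range(2))
    qml.adjoint(qml.AngleEmbedding)(x2, wires=range(2))
    return qml.probs(wires=range(2))

def kernel(x1, x2):
    return overlap_circuit(x1, x2)[0]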

Functions

closest_psd_matrix(K[, fix_diagonal, solver])

Return the closest positive semi-definite matrix to the given kernel matrix.

displace_matrix(K)

Remove negative eigenvalues from the given kernel matrix by adding a multiple of the identity matrix.

flip_matrix(K)

Remove negative eigenvalues from the given kernel matrix by taking the absolute value.

kernel_matrix(X1, X2, kernel)

Compute the matrix of pairwise kernel values for two given datasets.

mitigate_depolarizing_noise(K, num_wires, method)

Estimate depolarizing noise rate(s) from the diagonal entries of a kernel matrix and mitigate the noise, assuming a global depolarizing noise model.

polarity(X, Y, kernel[, ...])

Polarity of a given kernel function.

square_kernel_matrix(X, kernel[, ...])

Compute the square matrix of pairwise kernel values for a given dataset.

target_alignment(X, Y, kernel[, ...])

Target alignment of a given kernel function.

threshold_matrix(K)

Remove negative eigenvalues from the given kernel matrix by setting them to zero.
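
As a sketch of how the postprocessing functions listed above might be used, consider a small symmetric matrix standing in for a kernel matrix estimated from noisy or finite-shot data; the entries below are made up for illustration.

import numpy as np
import pennylane as qml

# Stand-in for a noisy 2-qubit kernel matrix: the diagonal lies below 1 and
# the matrix has one negative eigenvalue.
K_noisy = np.array([
    [0.9, 0.8, 0.1],
    [0.8, 0.9, 0.8],
    [0.1, 0.8, 0.9],
])

K_thresh = qml.kernels.threshold_matrix(K_noisy)      # set negative eigenvalues to zero
K_flipped = qml.kernels.flip_matrix(K_noisy)          # take absolute values of eigenvalues
K_displaced = qml.kernels.displace_matrix(K_noisy)    # add a multiple of the identity

# Estimate a global depolarizing noise rate from the diagonal entries and undo it.
K_mitigated = qml.kernels.mitigate_depolarizing_noise(K_noisy, num_wires=2, method="average")

# Fixing the diagonal solves a semi-definite program and requires the cvxpy package.
K_psd = qml.kernels.closest_psd_matrix(K_noisy, fix_diagonal=True)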

Description

Given a kernel

\[k: \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}, \quad (x_1, x_2)\mapsto k(x_1, x_2)\]

the kernel matrix of \(k\) on a training dataset \(\{(x_1, y_1),\dots, (x_n, y_n)\}\) with \(x_i\in\mathbb{R}^d\) and \(y_i\in\{-1, 1\}\) is defined as

\[K_{ij} = k(x_i, x_j).\]
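
With a callable kernel such as the one sketched in the overview, this matrix can be computed with square_kernel_matrix; kernel_matrix covers the rectangular case of two different datasets. The random data below is purely illustrative.

import pennylane as qml
from pennylane import numpy as np

# Illustrative datasets; `kernel` is the callable defined in the overview sketch.
X_train = np.random.uniform(0, np.pi, size=(4, 2))
X_test = np.random.uniform(0, np.pi, size=(3, 2))

K_train = qml.kernels.square_kernel_matrix(X_train, kernel)   # K_ij = k(x_i, x_j)
K_cross = qml.kernels.kernel_matrix(X_train, X_test, kernel)  # kernel values between the two datasets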

For valid kernels, the kernel matrix is a real symmetric positive semi-definite matrix. We also define the ideal kernel matrix for the training dataset, which perfectly predicts whether two points have identical labels:

\[K^\ast_{ij} = y_i y_j.\]
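
Equivalently, \(K^\ast\) is the outer product of the label vector with itself; for instance, with illustrative labels for four training points:

import numpy as np

Y = np.array([1, 1, -1, -1])   # example labels
K_ideal = np.outer(Y, Y)       # K*_ij = y_i * y_j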

We can measure the similarity between \(K\) and \(K^\ast\) through the kernel polarity, which can be expressed as the Frobenius inner product between the two matrices:

\[\operatorname{P}(k) = \langle K^\ast, K \rangle_F = \sum_{i,j=1}^n y_i y_j k(x_i, x_j)\]
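
The polarity function computes this quantity from the raw data and a kernel callable. A minimal sketch, reusing the illustrative kernel, X_train, and Y from the snippets above:

P = qml.kernels.polarity(X_train, Y, kernel)   # Frobenius inner product of K* and K (with optional label rescaling)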

Additionally, there is the kernel-target alignment, which is the normalized counterpart to the kernel polarity:

\[\begin{split} \operatorname{TA}(k) &= \frac{P(k)}{\lVert K^\ast \rVert_F\;\lVert K \rVert_F}\\ \lVert K\rVert_F &= \sqrt{\sum_{i,j=1}^n k(x_i, x_j)^2}\\ \lVert K^\ast\rVert_F &= \sqrt{\sum_{i,j=1}^n (y_iy_j)^2}\end{split}\]
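
The corresponding helper is target_alignment, used in the same way as polarity above:

TA = qml.kernels.target_alignment(X_train, Y, kernel)   # normalized kernel polarity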

For datasets with different numbers of training points per class, the labels are rescaled by the number of datapoints in the respective class so that the kernel polarity and kernel-target alignment are not dominated by the properties of the kernel for a single class.
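
In polarity and target_alignment this rescaling is controlled by the rescale_class_labels keyword argument, which is enabled by default; passing False yields the unrescaled quantities. The exact keyword may differ between versions, so the line below is a sketch rather than a guaranteed signature.

TA_unscaled = qml.kernels.target_alignment(X_train, Y, kernel, rescale_class_labels=False)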

Given a callable kernel function, all these quantities can readily be computed using the methods in this module.