Installation

Catalyst is officially supported on Linux (x86_64, aarch64) and macOS (arm64, x86_64) platforms, and pre-built binaries are distributed via the Python Package Index (PyPI) for Python versions 3.9 and higher. To install it, simply run the following pip command:

pip install pennylane-catalyst
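Since wheels are only published for Python 3.9 and higher, it can help to confirm the interpreter version before installing. A minimal sketch (the interpreter_supported helper is illustrative, not part of Catalyst):

```python
import sys

# Catalyst wheels are published for Python 3.9 and higher.
REQUIRED = (3, 9)

def interpreter_supported(version_info=None, required=REQUIRED):
    """Return True if the given (major, minor, ...) version meets the minimum."""
    if version_info is None:
        version_info = sys.version_info
    return tuple(version_info[:2]) >= required

print(interpreter_supported())
```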

Warning

macOS does not ship with a system compiler by default, but Catalyst depends on one. Please ensure that Xcode or the Xcode Command Line Tools are installed on your system before using Catalyst.

The easiest method of installation is to run xcode-select --install from the Terminal app.

Pre-built packages for Windows are not yet available, and compatibility with other platforms is untested and cannot be guaranteed. If you are using one of these platforms, please try out our Docker and Dev Container images described in the next section.

If you wish to contribute to Catalyst or develop against our runtime or compiler, instructions for building from source are also included further down.

Dev Containers

Try out Catalyst in self-contained, ready-to-go environments called Dev Containers:

Try Catalyst in Dev Container
You will need an existing installation of Docker, VS Code, and the VS Code Dev Containers extension.

If desired, the Docker images can also be used in a standalone fashion:

The user image provides an officially supported environment and automatically installs the latest release of Catalyst. The developer image only provides the right environment to build Catalyst from source, and requires launching the post-install script at .devcontainer/dev/post-install.sh from within the root of the running container.

Note

Due to a bug in the Dev Containers extension, clicking on the “Launch” badge will not prompt for a choice between the User and Dev containers. Instead, the User container is automatically chosen.

As a workaround, you can clone the Catalyst repository first, open it as a VS Code Workspace, and then reopen the Workspace in a Dev Container via the Reopen in Container command.

Minimal Building From Source Guide

Most developers will want to build Catalyst from source rather than use a pre-shipped package. This section presents a minimal building-from-source installation guide.

The next section provides a more detailed guide, which we strongly recommend reading through. Importantly, each component of Catalyst, namely the Python frontend, the MLIR compiler, and the runtime library, can be built and tested independently, which this minimal installation guide does not go over.

The essential steps are:

Warning

If using Anaconda or Miniconda, please make sure to upgrade libstdcxx-ng via:

conda install -c conda-forge libstdcxx-ng

If not, you may receive a 'GLIBCXX_3.4.x' not found error when running make test.

# Install common requirements
sudo apt install clang lld ccache libomp-dev ninja-build make cmake

# Clone the Catalyst repository
git clone --recurse-submodules --shallow-submodules https://github.com/PennyLaneAI/catalyst.git

# Install specific requirements for Catalyst
cd catalyst
pip install -r requirements.txt

# Build Catalyst
make all

# Test that everything is built properly
make test

These steps should give you the full functionality of Catalyst.

Detailed Building From Source Guide

Note

This section is a detailed building-from-source guide. Some commands in this section have already been included in the minimal guide.

To build Catalyst from source, developers should follow the instructions provided below for building all three modules: the Python frontend, the MLIR compiler, and the runtime library.

Requirements

In order to build Catalyst from source, developers need to ensure the following prerequisites are installed and available on the PATH (depending on the platform):

  • The clang compiler, LLD linker (Linux only), CCache compiler cache (optional, recommended), and OpenMP.

  • The Ninja, Make, and CMake (v3.20 or greater) build tools.

  • Python 3.9 or higher for the Python frontend.

  • The Python package manager pip must be version 22.3 or higher.

They can be installed via:

sudo apt install clang lld ccache libomp-dev ninja-build make cmake
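Two of the requirements above carry minimum versions (pip 22.3, CMake 3.20). A small hedged helper for comparing dotted version strings against a minimum (meets_minimum is illustrative only, and assumes plain numeric version components):

```python
from importlib.metadata import version

def meets_minimum(ver, minimum):
    """Return True if a dotted version string like '23.1.2' satisfies `minimum`."""
    parts = tuple(int(p) for p in ver.split(".")[:len(minimum)])
    return parts >= tuple(minimum)

# For example, check the installed pip against the 22.3 minimum:
print(meets_minimum(version("pip"), (22, 3)))
```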

Note

If the CMake version available on your system is too old, you can also install an up-to-date version via pip install cmake.

Warning

If using Anaconda or Miniconda, please make sure to upgrade libstdcxx-ng:

conda install -c conda-forge libstdcxx-ng

If not, you may receive the following error when running make test, because the conda environment is using an old version of libstdcxx-ng.

'GLIBCXX_3.4.x' not found

Once the prerequisites are installed, start by cloning the project repository including all its submodules:

git clone --recurse-submodules --shallow-submodules https://github.com/PennyLaneAI/catalyst.git

If you have an existing copy of the repository without its submodules, they can be fetched via:

git submodule update --init --depth=1

All additional build and developer dependencies are managed via the repository’s requirements.txt and can be installed as follows once the repository is cloned:

pip install -r requirements.txt

Note

Please ensure that your local site-packages for Python are available on the PATH; watch out for the corresponding warning that pip may give you during installation.
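To check programmatically whether a directory is on the PATH (for example, the scripts directory that pip warns about), here is a sketch with an illustrative dir_on_path helper:

```python
import os
import sysconfig

def dir_on_path(directory, path_value):
    """Return True if `directory` appears as an entry in a PATH-style string."""
    entries = [os.path.normpath(e) for e in path_value.split(os.pathsep) if e]
    return os.path.normpath(directory) in entries

# The scripts directory for the current interpreter (user installs may
# differ; pip's own warning names the exact directory to add).
scripts_dir = sysconfig.get_path("scripts")
print(dir_on_path(scripts_dir, os.environ.get("PATH", "")))
```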

Catalyst

The build process for Catalyst is managed via a series of Makefiles for each component. To build the entire project from start to finish, simply run the following make target from the top-level directory:

make all

To build each component one by one starting from the runtime, or to build additional backend devices beyond lightning.qubit, please follow the instructions below.

Runtime

By default, the runtime builds and installs all supported backend devices, enabling the execution of quantum circuits on local simulators and remote services, such as Amazon Braket. The PennyLane-Lightning suite of devices requires C++20 standard library features, so a sufficiently modern C++ compiler is recommended.

The full list of supported backends, and additional configuration options, are available in the Catalyst Runtime page.

From the root project directory, the runtime can then be built as follows:

make runtime

MLIR Dialects

To build the Catalyst MLIR component, along with the necessary core MLIR and MLIR-HLO dependencies, run:

make mlir

You can also choose to build the custom Catalyst dialects only, with:

make dialects

Frontend

To install the pennylane-catalyst Python package (the compiler frontend) in editable mode:

make frontend

Variables

After following the instructions above, no configuration of environment variables should be required. However, if you are building Catalyst components in custom locations, you may need to set and update a few variables on your system by adjusting the paths in the commands below accordingly.

To make the MLIR bindings from the Catalyst dialects discoverable to the compiler:

export PYTHONPATH="$PWD/mlir/build/python_packages/quantum:$PYTHONPATH"

To make runtime libraries discoverable to the compiler:

export RUNTIME_LIB_DIR="$PWD/runtime/build/lib"

To make MLIR libraries discoverable to the compiler:

export MLIR_LIB_DIR="$PWD/mlir/llvm-project/build/lib"

To make Enzyme libraries discoverable to the compiler:

export ENZYME_LIB_DIR="$PWD/mlir/Enzyme/build/Enzyme"

To make required tools in llvm-project/build, mlir-hlo/mhlo-build, and mlir/build discoverable to the compiler:

export PATH="$PWD/mlir/llvm-project/build/bin:$PWD/mlir/mlir-hlo/mhlo-build/bin:$PWD/mlir/build/bin:$PATH"
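When several of these variables are needed at once, the exports above can be collected into a single script to source from the project root. A sketch, assuming the default in-tree build locations used throughout this guide:

```shell
# env.sh -- source this from the Catalyst project root
export PYTHONPATH="$PWD/mlir/build/python_packages/quantum:$PYTHONPATH"
export RUNTIME_LIB_DIR="$PWD/runtime/build/lib"
export MLIR_LIB_DIR="$PWD/mlir/llvm-project/build/lib"
export ENZYME_LIB_DIR="$PWD/mlir/Enzyme/build/Enzyme"
export PATH="$PWD/mlir/llvm-project/build/bin:$PWD/mlir/mlir-hlo/mhlo-build/bin:$PWD/mlir/build/bin:$PATH"
```

Run source env.sh in each new shell session that needs the custom locations.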

Tests

The following target runs all available test suites with the default execution device in Catalyst:

make test

You can also test each module separately by running the test-frontend, test-dialects, and test-runtime targets instead. Jupyter Notebook demos are also testable via test-demos.

Additional Device Backends

The runtime tests can be run on additional devices via the same flags that were used to build them, but using the test-runtime target instead:

make test-runtime ENABLE_OPENQASM=ON

Note

The test-runtime target rebuilds the runtime with the specified flags. Therefore, running make runtime ENABLE_OPENQASM=ON and make test-runtime in succession will leave you without the OpenQASM device installed. In case of errors, it can also help to delete the build directory.

The Python test suite is also set up to run with different device backends. Assuming the respective device is available and compatible, they can be tested individually by specifying the PennyLane plugin device name in the test command:

make pytest TEST_BACKEND="lightning.kokkos"

AWS Braket devices have their own set of tests, which can be run either locally (LOCAL) or on the AWS Braket service (REMOTE) as follows:

make pytest TEST_BRAKET=LOCAL

Documentation

To build and test the documentation for Catalyst, you will need to install Sphinx and the other packages listed in doc/requirements.txt:

pip install -r doc/requirements.txt

Additionally, Doxygen is required to build the C++ documentation, and Pandoc to render Jupyter Notebooks.

They can be installed via:

sudo apt install doxygen pandoc

To generate the HTML files for the Catalyst documentation:

make docs

The generated files are located in doc/_build/html.

Install a Frontend-Only Development Environment from TestPyPI Wheels

It is possible to work on the source code repository and test the changes without having to compile Catalyst. This is ideal for situations where the changes do not target the runtime or the MLIR infrastructure, and only concern the frontend. It makes use of the shared libraries already shipped with the TestPyPI Catalyst wheels.

Essential Steps

To activate the development environment, open a terminal and issue the following commands:

# Clone the Catalyst repository without submodules, as they are not needed for frontend
# development
git clone [email protected]:PennyLaneAI/catalyst.git

# Setup the development environment based on the latest TestPyPI wheels.
# Please provide a path for the Python virtual environment
cd catalyst
bash ./setup_dev_from_wheel.sh /path/to/virtual/env

# Activate the Python virtual environment
source /path/to/virtual/env/bin/activate

To exit the Python virtual environment, type:

deactivate

Special Considerations

Catalyst dev wheels are tied to fixed versions of PennyLane and Lightning, which are installed together as a bundle. If you want to use different versions of PennyLane or Lightning, reinstall the desired versions after having run the script:

python -m pip install pennylane==0.*.*
python -m pip install pennylane-lightning==0.*.*

If you require the Catalyst repository with all its submodules, clone it this way:

git clone --recurse-submodules --shallow-submodules [email protected]:PennyLaneAI/catalyst.git

How Does it Work?

The provided script first creates and activates a Python virtual environment, so that neither the system Python configuration nor other virtual environments are affected.

In a second step, it obtains the latest Catalyst wheel from the TestPyPI server and creates hard links from the wheel code to the frontend code of the repository. This allows you to work directly on the repository's frontend code while testing your changes against the installed wheel's shared libraries, avoiding compilation.
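Hard links are why this works: both paths name the same underlying file, so an edit through either is visible through both. A minimal Python sketch of the mechanism (illustrative only, not the actual setup script; file names are made up):

```python
import os
import tempfile

# Two hard-linked paths share one inode, mimicking the repository
# frontend file and its counterpart in site-packages.
workdir = tempfile.mkdtemp()
repo_copy = os.path.join(workdir, "repo_frontend.py")
installed_copy = os.path.join(workdir, "installed_frontend.py")

with open(repo_copy, "w") as f:
    f.write("original = True\n")

os.link(repo_copy, installed_copy)  # create the hard link

# "Edit" the repository copy; the installed copy sees the same bytes.
with open(repo_copy, "a") as f:
    f.write("edited = True\n")

with open(installed_copy) as f:
    installed_text = f.read()

print(os.path.samefile(repo_copy, installed_copy))  # True
print("edited = True" in installed_text)  # True
```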

Further Steps

If everything goes well, git status should not report any changed files.

Before making changes to the frontend, make sure you create a new branch:

git checkout -b new-branch-name

Once in the new branch, make the desired changes using the editor or IDE of your preference.

You can test the changes by executing your sample code under the same virtual environment you used with the script. As files in the repository are hard-linked to the wheel code, you are actually changing the code stored in the Python site-packages folder as well, and you will automatically be using the shared libraries provided by the Python wheels. Again, there is no need to compile Catalyst from source.

You can commit your changes as usual. Once ready, push the new branch to the remote repository:

git push -u origin new-branch-name

Now you can go to GitHub and issue a Pull Request based on the new branch.