AWS Quantum Technologies Blog

How to run CUDA-Q programs on HAQM Braket notebook instances

HAQM Braket provides access to quantum computing resources and tools to develop quantum algorithms, test them on quantum circuit simulators, and run them on different quantum hardware technologies. HAQM Braket notebooks provide customers with a fully managed development environment that comes pre-installed with a range of quantum development frameworks, including the HAQM Braket SDK.

In a previous post, we announced that customers can now develop hybrid workflows using the open-source NVIDIA CUDA-Q platform and HAQM Braket Hybrid Jobs. This enables customers to run their programs on CUDA-Q’s simulators using powerful NVIDIA GPUs, as well as on quantum hardware backends supported on HAQM Braket.

In this blog post, we show how you can develop and run CUDA-Q applications interactively in Braket notebooks by configuring, with only a few lines of code, a Jupyter kernel backed by a CUDA-Q Docker container.

CUDA-Q Docker images

One of the most convenient options to install CUDA-Q on a Braket notebook instance is to use the Docker images available on the NVIDIA NGC Container Registry. Docker containers allow you to develop applications in a controlled environment which does not depend on or interfere with the host system. Braket notebook instances come with Docker pre-installed. To get started, create a new Braket notebook instance, open the Jupyter lab environment, and open a new terminal session.
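
For example, you can pre-pull the CUDA-Q base image from the terminal (the tag below is the one we extend later in this post; the docker build step would otherwise pull it automatically):

$ docker pull nvcr.io/nvidia/quantum/cuda-quantum:cu12-0.9.1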

Figure 1 – The launcher tab (landing page) of a new HAQM Braket notebook instance. The green arrow points to the Terminal button. Clicking on it will open a new terminal session where you can pull the CUDA-Q Docker image.


Jupyter Kernels

A Jupyter kernel is a programming-language-specific engine that executes the code contained in a Jupyter notebook. By default, Braket notebooks use the “conda_braket” kernel, which is backed by the Conda environment containing the Braket-provided tools and plugins. You can list all installed Jupyter kernels by running the following command in your terminal:

$ jupyter kernelspec list
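
On a new notebook instance, before any customization, you should see only the Braket-provided kernel, similar to:

Available kernels:
conda_braket     /home/ec2-user/.local/share/jupyter/kernels/conda_braket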

Configuring a Jupyter Kernel running a Docker container

Now we will introduce a solution that allows you to run your CUDA-Q programs in Braket notebooks using a kernel backed by a Docker container with a fully-featured CUDA-Q installation. For this example, we choose the stable-release CUDA-Q image “cu12-0.9.1” published on the NGC Container Registry. We then extend this image to install the Python packages ipython and ipykernel, as well as the HAQM Braket SDK. To do so, we create a new directory and Dockerfile from the terminal on our notebook instance:

$ mkdir /home/ec2-user/cudaq-jupyter-kernel
$ touch /home/ec2-user/cudaq-jupyter-kernel/Dockerfile

Edit the Dockerfile with the following content:

# Start from the stable-release CUDA-Q image on the NGC Container Registry
FROM nvcr.io/nvidia/quantum/cuda-quantum:cu12-0.9.1
# Install the Jupyter kernel dependencies and the HAQM Braket SDK
RUN pip install --upgrade pip ipython ipykernel amazon-braket-sdk
# Clear the entrypoint so the container runs the command supplied by the kernel spec
ENTRYPOINT []

Next, we build the image, which takes about 3-5 minutes:

$ docker build -t cudaq-jupyter-kernel /home/ec2-user/cudaq-jupyter-kernel/
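
Once the build completes, you can confirm that the image is available locally:

$ docker images cudaq-jupyter-kernel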

Then, we have to configure a Jupyter kernel that uses this image. For that, we create the following directory and file:

$ mkdir /home/ec2-user/.local/share/jupyter/kernels/docker_cudaq
$ touch /home/ec2-user/.local/share/jupyter/kernels/docker_cudaq/kernel.json

To the kernel.json file, we add the following JSON-serialized dictionary. It contains the kernel’s name as it will be displayed in the UI and the list of command-line arguments used to start the kernel, including the tag (“cudaq-jupyter-kernel”) of the Docker image we built in the previous step.

{
 "argv": [
  "/usr/bin/docker",
  "run",
  "--network=host",
  "-v",
  "{connection_file}:/connection-spec",
  "--mount",
  "type=bind,source=/home/ec2-user/amazon-braket-examples/examples,target=/home/cudaq/braket_examples",
  "cudaq-jupyter-kernel",
  "python",
  "-m",
  "ipykernel_launcher",
  "-f",
  "/connection-spec"
 ],
 "display_name": "docker_cudaq",
 "language": "python"
}

Note that we have included an example of how you can mount a directory from the file system of the Braket notebook instance into the CUDA-Q container. With the arguments “--mount” and “type=bind,source=/home/ec2-user/amazon-braket-examples/examples,target=/home/cudaq/braket_examples”, the Braket examples become available in the container’s home directory. This way, you can read or write files in the source directory from within your CUDA-Q programs.
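
For example, once the kernel is up and running (see the next steps), a notebook cell along the following lines should list the mounted examples from inside the container; the path matches the target of the bind mount above:

import os

# The bind mount makes the Braket examples visible inside the container
examples_dir = "/home/cudaq/braket_examples"
print(os.listdir(examples_dir))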

When you create a Braket notebook instance, you can choose from a selection of instance types comprising varying combinations of compute capacity and memory (see step 2 in this guide). Some instance types are GPU accelerated: “ml.p3.2xlarge” with 1 NVIDIA V100 GPU, “ml.p3.8xlarge” with 4 V100 GPUs, and “ml.p3.16xlarge” with 8 V100 GPUs (see the HAQM SageMaker AI pricing page for more details about these instances). To access these GPUs in your CUDA-Q program, you will need to add “--gpus=all” to kernel.json as shown below:

{
 "argv": [
  "/usr/bin/docker",
  "run",
  "--network=host",
  "--gpus=all",
  "-v",
  "{connection_file}:/connection-spec",
  "--mount",
  "type=bind,source=/home/ec2-user/amazon-braket-examples/examples,target=/home/cudaq/braket_examples",
  "cudaq-jupyter-kernel",
  "python",
  "-m",
  "ipykernel_launcher",
  "-f",
  "/connection-spec"
 ],
 "display_name": "docker_cudaq",
 "language": "python"
}
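
If you are on one of these GPU-accelerated instance types, you can check from a notebook cell that CUDA-Q sees the GPUs inside the container; a minimal sketch:

import cudaq

# Report how many GPUs CUDA-Q detects inside the container
print(cudaq.num_available_gpus())

# Select the GPU-accelerated statevector simulator
cudaq.set_target("nvidia")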

Done! Now, when you list all installed Jupyter kernels, you should see the “docker_cudaq” kernel:

$ jupyter kernelspec list

Available kernels:
conda_braket     /home/ec2-user/.local/share/jupyter/kernels/conda_braket
docker_cudaq     /home/ec2-user/.local/share/jupyter/kernels/docker_cudaq

Verify the CUDA-Q Jupyter kernel

At this point, you will see a tile named “docker_cudaq” in the launcher tab, as shown in the screenshot below. Click on it to create a new notebook that uses the “docker_cudaq” kernel.

Figure 2 – The launcher tab (landing page) of an HAQM Braket notebook instance with a docker_cudaq kernel. The green arrow points to the docker_cudaq notebook button. Clicking on it will open a notebook that allows you to execute CUDA-Q programs.


If you ever need additional Python modules, or newer versions of CUDA-Q are released, all you need to do is modify the Dockerfile and rebuild the Docker image (by running the ‘docker build’ command again). The Jupyter kernel specification does not need to change.

We’ve run a few examples, displayed in the screenshot below. Note that we executed CUDA-Q kernels on a Braket device without having to set AWS credentials.

Figure 3 – Example CUDA-Q commands and results that verify the notebook is operating as expected.


A number of CUDA-Q example programs are available in the CUDA-Q documentation and the Braket example repository. Give them a try.
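
As a quick first test of your own, a minimal Bell-state program along the following lines should run in the “docker_cudaq” kernel. It samples on CUDA-Q’s default simulator; consult the CUDA-Q documentation for switching the target to a Braket device.

import cudaq

@cudaq.kernel
def bell():
    # Prepare a two-qubit Bell state and measure both qubits
    qubits = cudaq.qvector(2)
    h(qubits[0])
    x.ctrl(qubits[0], qubits[1])
    mz(qubits)

# Sample the kernel 1000 times and print the measurement counts
result = cudaq.sample(bell, shots_count=1000)
print(result)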

Summary

In this blog post, we showed how you can run CUDA-Q code interactively in Jupyter notebooks on HAQM Braket notebook instances. We achieved this by configuring a Jupyter kernel backed by a Docker container running a fully-featured installation of CUDA-Q. All it took was three simple steps and a few lines of code. Now CUDA-Q developers can use HAQM Braket notebook instances as their cloud-based quantum development environment.