nvidia-container-toolkit
Enables containerized workloads to access NVIDIA GPUs by integrating with container runtimes.
What is nvidia-container-toolkit?
The NVIDIA Container Toolkit (nvidia-container-toolkit) enables containerized workloads to access NVIDIA GPUs by integrating with container runtimes like Docker and containerd. It provides the underlying runtime hooks and libraries needed to expose GPU resources inside containers, ensuring that CUDA applications can run with native performance.
This image is typically used in environments where GPU acceleration is required – such as AI/ML training, inference, scientific computing, and video processing. It works with the NVIDIA runtime, which can be configured to launch containers with GPU access on supported hosts.
While the toolkit is rarely used on its own, it plays a critical role in enabling GPU support across Docker and Kubernetes-based platforms.
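On Kubernetes, the toolkit is installed on each GPU node and paired with the NVIDIA device plugin, which advertises GPUs to the scheduler. A minimal sketch, assuming the device plugin's published manifest (the version pinned below is only an example; check the project's releases for current versions):
<code># Deploy the NVIDIA device plugin so nodes can advertise GPUs
kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.14.1/nvidia-device-plugin.yml

# Nodes then expose an nvidia.com/gpu resource that pods can request
kubectl describe node <node-name> | grep nvidia.com/gpu</code>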
How to use this image
The nvidia-container-toolkit itself is typically installed on the host, not used as a standalone container. However, it provides the runtime components necessary for GPU-enabled containers.
Install the NVIDIA runtime on the host:
<code>distribution=$(. /etc/os-release; echo $ID$VERSION_ID) \
  && curl -s -L https://nvidia.github.io/libnvidia-container/gpgkey | sudo apt-key add - \
  && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list</code>
Note that apt-key is deprecated on recent Debian/Ubuntu releases; NVIDIA's current installation guide uses a signed keyring file instead.
<code>sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit</code>
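To verify the installation, you can check the toolkit CLI (the reported version string will vary by release):
<code>nvidia-ctk --version</code>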
Configure Docker to use the NVIDIA runtime:
<code>sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker</code>
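One way to confirm Docker picked up the new runtime is to inspect the runtime list and the daemon configuration the configure command writes (paths assume a default Docker installation):
<code>docker info | grep -i runtimes
cat /etc/docker/daemon.json</code>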
Run a GPU-enabled container:
<code>docker run --rm --gpus all nvidia/cuda:12.3.1-base-ubuntu20.04 nvidia-smi</code>
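Access can also be limited to specific GPUs or a GPU count through the same --gpus flag; the device index below assumes a multi-GPU host:
<code># Expose only the first GPU
docker run --rm --gpus '"device=0"' nvidia/cuda:12.3.1-base-ubuntu20.04 nvidia-smi

# Expose any two GPUs
docker run --rm --gpus 2 nvidia/cuda:12.3.1-base-ubuntu20.04 nvidia-smi</code>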
Logging and behavior:
The toolkit operates at the container runtime level, so it doesn’t output logs directly but enables containers to access the GPU environment and drivers.
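For troubleshooting, debug logging can usually be enabled in the toolkit's configuration file; the paths below reflect a default installation and may differ on your system:
<code># Default configuration file for the NVIDIA container runtime
cat /etc/nvidia-container-runtime/config.toml

# Uncommenting its "debug" entries writes logs to files such as
# /var/log/nvidia-container-runtime.log for the runtime and CLI</code>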
Image variants
There is no standalone container image for nvidia-container-toolkit. Instead, it is installed as a host-level package and interacts with the container runtime.
However, it supports use with GPU-enabled container images such as:
- nvidia/cuda:<tag> – Standard CUDA base images for running GPU-accelerated applications.
- nvcr.io/nvidia/pytorch, nvcr.io/nvidia/tensorflow (from NVIDIA's NGC registry) – Framework-specific containers that require the runtime to function on GPU-enabled hosts (see the example at the end of this section).
The toolkit is available via Debian/Ubuntu/RHEL packages or from GitHub. No official Docker image is published under this name.
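Once the toolkit is installed and the runtime configured, those images run the same way as the CUDA example above. For instance, with a framework image from NGC (the tag is illustrative; check NGC for current releases):
<code>docker run --rm --gpus all nvcr.io/nvidia/pytorch:24.01-py3 \
  python -c "import torch; print(torch.cuda.is_available())"</code>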