236

I'm searching for a way to use the GPU from inside a Docker container.

The container will execute arbitrary code, so I don't want to use privileged mode.

Any tips?

From previous research I understood that `run -v` and/or LXC cgroups were the way to go, but I'm not sure how to pull that off exactly.

Antoine
  • 13,494
  • 6
  • 40
  • 52
Regan
  • 8,231
  • 5
  • 23
  • 23
  • See http://stackoverflow.com/questions/17792161/is-it-possible-to-expose-a-usb-device-to-a-lxc-docker-container which is similar to your need. – Nicolas Goy Aug 14 '14 at 01:41
  • 1
    @NicolasGoy The link was good but not that useful since i can't use privileged for security reason. The lxc-cgroups was a good pointer, but not enough. I found a way, and i will self answer when everything will be polished. – Regan Aug 18 '14 at 10:00

10 Answers

158

Writing an updated answer, since most of the existing answers are obsolete by now.

Versions earlier than Docker 19.03 required nvidia-docker2 and the --runtime=nvidia flag.

Since Docker 19.03, you need to install the nvidia-container-toolkit package and then use the --gpus all flag.
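For a quick before/after comparison, here is a minimal sketch of both invocations (the nvidia/cuda image tag is only an example; substitute whatever CUDA image you actually use):

# Docker < 19.03, with nvidia-docker2 installed
docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi

# Docker >= 19.03, with nvidia-container-toolkit installed
docker run --gpus all --rm nvidia/cuda nvidia-smi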

So, here are the basics:

Package Installation

Install the nvidia-container-toolkit package as per the official documentation on GitHub.

For Red Hat-based OSes, execute the following commands:

$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | sudo tee /etc/yum.repos.d/nvidia-docker.repo

$ sudo yum install -y nvidia-container-toolkit
$ sudo systemctl restart docker

For Debian-based OSes, execute the following commands:

# Add the package repositories
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

$ sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
$ sudo systemctl restart docker
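As a quick sanity check that the toolkit is wired up (the image tag is just an example; any CUDA-enabled image will do):

$ docker run --rm --gpus all nvidia/cuda nvidia-smi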

Running Docker with GPU support

docker run --name my_all_gpu_container --gpus all -t nvidia/cuda

Please note, the --gpus all flag is used to assign all available GPUs to the Docker container.

To assign a specific GPU to the Docker container (in case multiple GPUs are available on your machine):

docker run --name my_first_gpu_container --gpus device=0 nvidia/cuda

Or

docker run --name my_first_gpu_container --gpus '"device=0"' nvidia/cuda
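You can also request a given number of GPUs or a list of specific devices; a sketch using the standard --gpus syntax (the device indices are examples, adjust them to your machine):

docker run --name my_two_gpu_container --gpus 2 nvidia/cuda
docker run --name my_multi_gpu_container --gpus '"device=0,2"' nvidia/cuda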
Rohit Lal
  • 2,791
  • 1
  • 20
  • 36
  • 19
    As of 2019 this is the right way of using GPU from within docker containers. – Timur Bakeyev Oct 20 '19 at 22:02
  • 4
    Has anyone ever tried this from inside a Batch job on AWS? – medley56 May 05 '20 at 21:54
  • 1
    I believe this is most relevant. Wish I had found it sooner, though I had to adapt the instructions from https://github.com/NVIDIA/nvidia-docker to work with Ubuntu 20.04 – VictorLegros May 13 '20 at 03:33
  • @TimurBakeyev yet we still can't run ubuntu container on windows host machine? – ERJAN Aug 26 '20 at 15:00
  • @TimurBakeyev, could u help with similar question ? https://stackoverflow.com/questions/63600436/nvidia-driver-support-on-ubuntu-in-docker-from-host-windows-found-no-nvidia-d – ERJAN Aug 26 '20 at 15:02
  • 2
    The official NVIDIA instructions https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker say to install nvidia-docker2. Should they be updated? – WurmD Sep 28 '20 at 11:48
  • Yeah, @WurmD is right. Thanks a lot for your guide Rohit, but please explain the discrepancy in package installation between your answer (`nvidia-container-toolkit`) and the [**official documentation**](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker) (`nvidia-docker2`). – Atralb Dec 13 '20 at 23:36
  • @Atralb I can explain it after some investigations: In the commit 88a2fd contributer dualvtable moved the installation guide to the wiki, but the original Readme text includes the container-toolkit. I already created an Issue:https://github.com/NVIDIA/nvidia-docker/issues/1474 – MaKaNu Mar 17 '21 at 17:42
  • Docker fails to share GPU with `--gpus all` if it is not NVIDIA. – mviereck Jul 11 '22 at 09:51
  • If `sudo systemctl restart docker` fails, try `sudo service docker restart`. – b00t Sep 01 '22 at 08:45
  • official documentation tells before restarting docker : Configure the Docker daemon to recognize the NVIDIA Container Runtime: `nvidia-ctk runtime configure --runtime=docker` – Gorkem Feb 28 '23 at 22:47
Is there any possible way to NOT use `--gpus all` to specify this but set some ENV variable in the dockerfile? I have a use case where it's hard to pass docker arguments. – momo668 Jul 21 '23 at 05:43
140

Regan's answer is great, but it's a bit out of date: the correct way to do this is to avoid the LXC execution context, as Docker has dropped LXC as the default execution context as of Docker 0.9.

Instead, it's better to tell Docker about the NVIDIA devices via the --device flag, and just use the native execution context rather than LXC.

Environment

These instructions were tested on the following environment:

  • Ubuntu 14.04
  • CUDA 6.5
  • AWS GPU instance.

Install nvidia driver and cuda on your host

See CUDA 6.5 on AWS GPU Instance Running Ubuntu 14.04 to get your host machine setup.

Install Docker

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
$ sudo sh -c "echo deb https://get.docker.com/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
$ sudo apt-get update && sudo apt-get install lxc-docker

Find your nvidia devices

ls -la /dev | grep nvidia

crw-rw-rw-  1 root root    195,   0 Oct 25 19:37 nvidia0 
crw-rw-rw-  1 root root    195, 255 Oct 25 19:37 nvidiactl
crw-rw-rw-  1 root root    251,   0 Oct 25 19:37 nvidia-uvm

Run Docker container with nvidia driver pre-installed

I've created a Docker image that has the CUDA drivers pre-installed. The Dockerfile is available on Docker Hub if you want to know how this image was built.

You'll want to customize this command to match your nvidia devices. Here's what worked for me:

 $ sudo docker run -ti --device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidiactl:/dev/nvidiactl --device /dev/nvidia-uvm:/dev/nvidia-uvm tleyden5iwx/ubuntu-cuda /bin/bash

Verify CUDA is correctly installed

This should be run from inside the docker container you just launched.

Install CUDA samples:

$ cd /opt/nvidia_installers
$ ./cuda-samples-linux-6.5.14-18745345.run -noprompt -cudaprefix=/usr/local/cuda-6.5/

Build deviceQuery sample:

$ cd /usr/local/cuda/samples/1_Utilities/deviceQuery
$ make
$ ./deviceQuery   

If everything worked, you should see the following output:

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version = 6.5, NumDevs =    1, Device0 = GRID K520
Result = PASS
tleyden
  • 1,942
  • 2
  • 12
  • 17
  • 6
    Why do you install lxc-docker if you don't need lxc then? – MP0 Nov 26 '14 at 23:19
  • I agree it's confusing. According to the [Docker documentation](https://docs.docker.com/installation/ubuntulinux/#ubuntu-trusty-1404-lts-64-bit) the recommended way to install the latest version of Docker is to install lxc-docker. But there is no need to pass `-e lxc` when starting Docker, so it's using the default (libcontainer?). Maybe they just haven't gotten around to renaming the package from lxc-docker to something else. – tleyden Dec 01 '14 at 22:23
  • 7
    I have CUDA 5.5 on the host and CUDA 6.5 in a container created from your image. CUDA is working on the host, and I passed the devices to the container. The container sees the GPUs through `ls -la /dev | grep nvidia` but CUDA can't find any CUDA-capable device: `./deviceQuery ` `./deviceQuery Starting...` `CUDA Device Query (Runtime API) version (CUDART static linking)` `cudaGetDeviceCount returned 38` `-> no CUDA-capable device is detected` `Result = FAIL` Is it because of the mismatch of the CUDA libs on the host and in the container? – brunetto Dec 17 '14 at 17:27
  • 1
    I don't know, you might want to ask on the nvidia forums. Assuming the version mismatch is a problem, you could take this [Dockerfile](https://registry.hub.docker.com/u/tleyden5iwx/ubuntu-cuda/dockerfile/) and edit it to have the CUDA 5.5 drivers, then rebuild a new docker image from it and use that. – tleyden Dec 18 '14 at 18:51
  • when you passthrough the host GPU to the docker process, do you need another graphical card for your Host? I know you need 2 video cards to make VM passthrough work, one for host, one for VM. Is it true for docker as well? – Xianlin Jun 15 '15 at 05:24
  • @Xianlin sounds like you are misunderstanding docker containers. With containers, the processes in a container run on _the same kernel_ as the host OS, and so they can access kernel modules / drivers exactly the same way as they would on the host, with no translation / passthrough middle layer required. – tleyden Jun 16 '15 at 15:27
  • With nvidia-docker you can use NV_GPU variable to select GPU and it will add for you device arguments https://github.com/NVIDIA/nvidia-docker/wiki/GPU-isolation#nvidia-docker – pplonski Sep 06 '16 at 13:07
  • 4
    Can you explain why image need to install nvidia driver? I thought only host installing nvidia driver (and use --device ...) is sufficient? – Helin Wang Mar 17 '17 at 01:43
  • 3
    Currently there is no way of doing this if you have Windows as the host. – Souradeep Nanda Jul 11 '18 at 03:19
  • use [`nvidia-docker2`](https://github.com/NVIDIA/k8s-device-plugin/issues/168#issuecomment-625981223) maybe better – 7oud Nov 25 '20 at 06:52
  • Please do not use this answer anymore as your solution. It's completely out of date. – hookenz Jan 16 '23 at 19:24
44

OK, I finally managed to do it without using the --privileged mode.

I'm running on Ubuntu Server 14.04 and I'm using the latest CUDA (6.0.37 for Linux 13.04, 64-bit).


Preparation

Install the NVIDIA driver and CUDA on your host. (It can be a little tricky, so I suggest you follow this guide: https://askubuntu.com/questions/451672/installing-and-testing-cuda-in-ubuntu-14-04)

ATTENTION: It's really important that you keep the files you used for the host CUDA installation.


Get the Docker Daemon to run using lxc

We need to run the Docker daemon using the LXC driver to be able to modify the configuration and give the container access to the devices.

One-time use:

sudo service docker stop
sudo docker -d -e lxc

Permanent configuration: Modify your Docker configuration file located at /etc/default/docker and change the DOCKER_OPTS line by adding '-e lxc'. Here is my line after modification:

DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 -e lxc"

Then restart the daemon using

sudo service docker restart

How to check whether the daemon is actually using the LXC driver?

docker info

The Execution Driver line should look like this:

Execution Driver: lxc-1.0.5

Build your image with the NVIDIA and CUDA drivers.

Here is a basic Dockerfile to build a CUDA-compatible image.

FROM ubuntu:14.04
MAINTAINER Regan <http://stackoverflow.com/questions/25185405/using-gpu-from-a-docker-container>

RUN apt-get update && apt-get install -y build-essential
RUN apt-get --purge remove -y nvidia*

# Get the install files you used to install CUDA and the NVIDIA drivers on your host
ADD ./Downloads/nvidia_installers /tmp/nvidia
# Install the driver
RUN /tmp/nvidia/NVIDIA-Linux-x86_64-331.62.run -s -N --no-kernel-module
# The driver installer leaves temp files behind during a docker build (I don't have an explanation why) and the CUDA installer fails if they are still there, so delete them
RUN rm -rf /tmp/selfgz7
# CUDA installer
RUN /tmp/nvidia/cuda-linux64-rel-6.0.37-18176142.run -noprompt
# CUDA samples; remove this line if you don't want them
RUN /tmp/nvidia/cuda-samples-linux-6.0.37-18176142.run -noprompt -cudaprefix=/usr/local/cuda-6.0
# Add the CUDA libraries to the library path
RUN export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
# Update the ld.so.conf.d directory
RUN touch /etc/ld.so.conf.d/cuda.conf
# Delete installer files
RUN rm -rf /temp/*

Run your image.

First you need to identify the major number associated with your device. The easiest way is to run the following command:

ls -la /dev | grep nvidia

If the result is blank, launching one of the samples on the host should do the trick. In the output you will see a pair of numbers between the group and the date. These two numbers are called the major and minor numbers (written in that order) and designate a device. We will just use the major numbers for convenience.
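If you want to print those numbers directly, a small sketch (GNU stat reports the major and minor device numbers in hexadecimal):

stat -c '%n  major=%t minor=%T' /dev/nvidia*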

Why did we activate the LXC driver? To use the lxc-conf option that allows us to permit our container to access those devices. The option is (I recommend using * for the minor number because it reduces the length of the run command):

--lxc-conf='lxc.cgroup.devices.allow = c [major number]:[minor number or *] rwm'

So if I want to launch a container (supposing your image name is cuda):

docker run -ti --lxc-conf='lxc.cgroup.devices.allow = c 195:* rwm' --lxc-conf='lxc.cgroup.devices.allow = c 243:* rwm' cuda
Community
  • 1
  • 1
Regan
  • 8,231
  • 5
  • 23
  • 23
  • Can you share the container? – Chillar Anand Sep 18 '14 at 14:37
  • 1
    Docker has a `--device` option to allow container to access host's device. However I tried to use `--device=/dev/nvidia0` to allow docker container to run cuda and failed. – shiquanwang Oct 09 '14 at 08:06
  • 4
I then succeeded with exposing all `/dev/nvidia0`, `/dev/nvidia1`, `/dev/nvidiactl` and `/dev/nvidia-uvm` with `--device`. Though don't know why. – shiquanwang Oct 09 '14 at 15:22
The --device option wasn't implemented when I had to find this solution. You need at least nvidia0 or nvidia1 (graphics card), nvidiactl (general NVIDIA device) and nvidia-uvm (unified memory device). – Regan Oct 10 '14 at 23:16
  • 2
    Thanks for your hints on the `/dev/nvidia*` @Regan. For @ChillarAnand I have made a [cuda-docker](https://registry.hub.docker.com/u/shiquanwang/docker-cuda/) – shiquanwang Oct 13 '14 at 06:20
  • I got it working with CUDA 6.5 too. I wrote up a blog article: [Docker on AWS GPU Ubuntu 14.04 / CUDA 6.5](http://tleyden.github.io/blog/2014/10/25/docker-on-aws-gpu-ubuntu-14-dot-04-slash-cuda-6-dot-5/) with the exact steps I followed. – tleyden Oct 25 '14 at 22:15
  • Full instructions on using --device option in my answer below. – tleyden Oct 26 '14 at 00:39
  • 1
    Might be an idea to point update this answer. Its no longer recommended to do it this way – hookenz Sep 15 '20 at 02:30
34

We just released an experimental GitHub repository which should ease the process of using NVIDIA GPUs inside Docker containers.

3XX0
  • 1,315
  • 1
  • 13
  • 25
  • 4
    Is there windows support? It doesn't seem to be, but perhaps I'm missing something. – Blaze Nov 14 '15 at 22:36
  • 7
    There is no Windows support. Running CUDA container requires Nvidia drivers for Linux and access to Linux devices representing GPU, e.g. /dev/nvidia0. These devices and drivers are not available when Docker is installed on Windows and running inside VirtualBox virtual machine. – Paweł Bylica Feb 03 '16 at 12:29
  • Still need the --device declarations in the run command? I've built a container FROM nvidia/cuda and the container runs fine, but the app (Wowza) isn't recognizing the GPUs while it does just fine when run directly on the host (this host, so I know drivers are fine). I'm running 361.28. The host is EC2 using the NVidia AMI on g2.8xlarge. – rainabba Feb 27 '16 at 13:45
  • No everything is taken care of by nvidia-docker, you should be able to run nvidia-smi inside the container and see your devices – 3XX0 Feb 28 '16 at 03:14
28

Recent enhancements by NVIDIA have produced a much more robust way to do this.

Essentially they have found a way to avoid the need to install the CUDA/GPU driver inside the containers and have it match the host kernel module.

Instead, drivers are on the host and the containers don't need them. It requires a modified docker-cli right now.

This is great, because now containers are much more portable.


A quick test on Ubuntu:

# Install nvidia-docker and nvidia-docker-plugin
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
sudo dpkg -i /tmp/nvidia-docker*.deb && rm /tmp/nvidia-docker*.deb

# Test nvidia-smi
nvidia-docker run --rm nvidia/cuda nvidia-smi
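As an aside, nvidia-docker lets you restrict which GPUs a container sees via the NV_GPU environment variable (see the project's GPU-isolation wiki page); for example:

# Expose only the first GPU to the container
NV_GPU=0 nvidia-docker run --rm nvidia/cuda nvidia-smi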

For more details see: GPU-Enabled Docker Container and: https://github.com/NVIDIA/nvidia-docker

hookenz
  • 36,432
  • 45
  • 177
  • 286
  • 1
    This works well once you get all the steps. Nvidia doesn't provide it all in one place, but [this example](http://www.morethantechnical.com/2018/01/27/an-automatic-tensorflow-cuda-docker-jupyter-machine-on-google-cloud-platform/) gives everything you need to make it work with a common use case. – KobeJohn Feb 22 '18 at 10:28
  • @KobeJohn - I just followed the installation instructions, the how to use command line and make sure my containers inherit from the cuda ones. It just works for me. – hookenz Feb 22 '18 at 19:00
  • 2
    Actually, can you give the real-life scenarios where use of nvidia-docker makes sense? – Suncatcher May 06 '18 at 15:23
  • @Suncatcher - I'm using it in a cluster that requires access to the GPU for 3D rendering. Dockerizing the apps made things simpler to deploy and maintain. – hookenz May 06 '18 at 21:42
17

Updated for CUDA 8.0 on Ubuntu 16.04

Dockerfile

FROM ubuntu:16.04
MAINTAINER Jonathan Kosgei <jonathan@saharacluster.com>

# A docker container with the Nvidia kernel module and CUDA drivers installed

ENV CUDA_RUN https://developer.nvidia.com/compute/cuda/8.0/prod/local_installers/cuda_8.0.44_linux-run

RUN apt-get update && apt-get install -q -y \
  wget \
  module-init-tools \
  build-essential 

RUN cd /opt && \
  wget $CUDA_RUN && \
  chmod +x cuda_8.0.44_linux-run && \
  mkdir nvidia_installers && \
  ./cuda_8.0.44_linux-run -extract=`pwd`/nvidia_installers && \
  cd nvidia_installers && \
  ./NVIDIA-Linux-x86_64-367.48.run -s -N --no-kernel-module

RUN cd /opt/nvidia_installers && \
  ./cuda-linux64-rel-8.0.44-21122537.run -noprompt

# Ensure the CUDA libs and binaries are in the correct environment variables
ENV LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-8.0/lib64
ENV PATH=$PATH:/usr/local/cuda-8.0/bin

RUN cd /opt/nvidia_installers &&\
    ./cuda-samples-linux-8.0.44-21122537.run -noprompt -cudaprefix=/usr/local/cuda-8.0 &&\
    cd /usr/local/cuda/samples/1_Utilities/deviceQuery &&\ 
    make

WORKDIR /usr/local/cuda/samples/1_Utilities/deviceQuery
Run your container:

sudo docker run -ti --device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidiactl:/dev/nvidiactl --device /dev/nvidia-uvm:/dev/nvidia-uvm <built-image> ./deviceQuery

You should see output similar to:

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = GRID K520 Result = PASS

Jonathan
  • 10,792
  • 5
  • 65
  • 85
  • 3
    I get following output. cudaGetDeviceCount returned 38 -> no CUDA-capable device is detected Result = FAIL – Soichi Hayashi Apr 30 '17 at 13:03
  • Late reply, but it means you probably don't have a GPU on that machine – Jonathan Aug 15 '17 at 16:34
  • Would a Cuda-9 version be nearly same as this? – huseyin tugrul buyukisik Nov 25 '17 at 15:26
  • @huseyintugrulbuyukisik see this answer on askubuntu https://askubuntu.com/questions/967332/how-can-i-install-cuda-9-on-ubuntu-17-10, I'd say you could use this answer as a guide but I haven't worked with cuda 9 to confirm that the same steps would apply – Jonathan Nov 25 '17 at 15:30
  • Don't do it this way. This is the old way. Use the new way. See link to my answer. This method is fraught with problems. – hookenz Nov 29 '17 at 17:18
14

Goal:

My goal was to make a CUDA-enabled Docker image without using nvidia/cuda as the base image, because I have a custom Jupyter image that I want to base it on.

Prerequisite:

The host machine had the NVIDIA driver, CUDA toolkit, and nvidia-container-toolkit already installed. Please refer to the official docs, and to Rohit's answer.

Test that the NVIDIA driver and CUDA toolkit are installed correctly with nvidia-smi on the host machine, which should display the correct "Driver Version" and "CUDA Version" and show GPU info.

Test that nvidia-container-toolkit is installed correctly with: docker run --rm --gpus all nvidia/cuda:latest nvidia-smi

Dockerfile

I found what I assume to be the official Dockerfile for nvidia/cuda here. I "flattened" it, appended the contents to my Dockerfile, and tested it to work nicely:

FROM sidazhou/scipy-notebook:latest
# FROM ubuntu:18.04 

###########################################################################
# See https://gitlab.com/nvidia/container-images/cuda/-/blob/master/dist/10.1/ubuntu18.04-x86_64/base/Dockerfile
# See https://sarus.readthedocs.io/en/stable/user/custom-cuda-images.html
###########################################################################
USER root

###########################################################################
# base
RUN apt-get update && apt-get install -y --no-install-recommends \
    gnupg2 curl ca-certificates && \
    curl -fsSL https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub | apt-key add - && \
    echo "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 /" > /etc/apt/sources.list.d/cuda.list && \
    echo "deb https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 /" > /etc/apt/sources.list.d/nvidia-ml.list && \
    apt-get purge --autoremove -y curl \
    && rm -rf /var/lib/apt/lists/*

ENV CUDA_VERSION 10.1.243
ENV CUDA_PKG_VERSION 10-1=$CUDA_VERSION-1

# For libraries in the cuda-compat-* package: https://docs.nvidia.com/cuda/eula/index.html#attachment-a
RUN apt-get update && apt-get install -y --no-install-recommends \
    cuda-cudart-$CUDA_PKG_VERSION \
    cuda-compat-10-1 \
    && ln -s cuda-10.1 /usr/local/cuda && \
    rm -rf /var/lib/apt/lists/*

# Required for nvidia-docker v1
RUN echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf && \
    echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf

ENV PATH /usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH}
ENV LD_LIBRARY_PATH /usr/local/nvidia/lib:/usr/local/nvidia/lib64


###########################################################################
#runtime next
ENV NCCL_VERSION 2.7.8

RUN apt-get update && apt-get install -y --no-install-recommends \
    cuda-libraries-$CUDA_PKG_VERSION \
    cuda-npp-$CUDA_PKG_VERSION \
    cuda-nvtx-$CUDA_PKG_VERSION \
    libcublas10=10.2.1.243-1 \
    libnccl2=$NCCL_VERSION-1+cuda10.1 \
    && apt-mark hold libnccl2 \
    && rm -rf /var/lib/apt/lists/*

# Keep apt from auto-upgrading the cublas package. See https://gitlab.com/nvidia/container-images/cuda/-/issues/88
RUN apt-mark hold libcublas10


###########################################################################
#cudnn7 (not cudnn8) next

ENV CUDNN_VERSION 7.6.5.32

RUN apt-get update && apt-get install -y --no-install-recommends \
    libcudnn7=$CUDNN_VERSION-1+cuda10.1 \
    && apt-mark hold libcudnn7 && \
    rm -rf /var/lib/apt/lists/*


ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES all
ENV NVIDIA_REQUIRE_CUDA "cuda>=10.1"


###########################################################################
#docker build -t sidazhou/scipy-notebook-gpu:latest .

#docker run -itd --gpus all \
#  -p 8888:8888 \
#  -p 6006:6006 \
#  --user root \
#  -e NB_UID=$(id -u) \
#  -e NB_GID=$(id -g) \
#  -e GRANT_SUDO=yes \
#  -v ~/workspace:/home/jovyan/work \
#  --name sidazhou-jupyter-gpu \
#  sidazhou/scipy-notebook-gpu:latest

#docker exec sidazhou-jupyter-gpu python -c "import tensorflow as tf; print(tf.config.experimental.list_physical_devices('GPU'))"
Sida Zhou
  • 3,529
  • 2
  • 33
  • 48
This answer really saved me! I had to install tensorflow-gpu on an existing docker image using ubuntu 16.04 plus a lot of other dependencies and this Dockerfile was the only way to install it cleanly. Note: I had to add a RUN apt-get install apt-transport-https after the first run (so that the later RUN can download from https nvidia urls) and I also removed the apt-get purge and rm -rf /var/lib/apt/lists/ statements which were apparently causing some trouble. – BlueCoder May 12 '21 at 17:09
  • Careful, you fail to reset the `USER` after installing CUDA. This can lead to issues with the image. Doing `USER $NB_UID` should reset it. Furthermore, I think this can be simplified by using `conda`, which is available inside the `scipy-notebook` image anyway. – Konrad Rudolph Aug 26 '21 at 09:11
  • Oh, and could you also post the Dockerfile source for `sidazhou/scipy-notebook`? – Konrad Rudolph Aug 26 '21 at 11:00
  • 1
    @KonradRudolph It's based from `jupyter/scipy-notebook` with some additional packages installed. I thought it is not relevant to this discussion tho. – Sida Zhou Oct 11 '21 at 10:29
  • @SidaZhou Ah, if it’s just like `jupyter/scipy-notebook` then that’s fine, yes. I haven’t gotten this working for myself but I haven’t worked on this project in over a month and I don’t remember whether I tried basing it off `scipy-notebook`. – Konrad Rudolph Oct 11 '21 at 11:17
3

To use a GPU from a Docker container, instead of using native Docker, use nvidia-docker. To install nvidia-docker, use the following commands:

curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/ubuntu16.04/amd64/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
sudo apt-get install -y nvidia-docker
sudo pkill -SIGHUP dockerd # Restart Docker Engine
sudo nvidia-docker run --rm nvidia/cuda nvidia-smi # finally run nvidia-smi in a container
Patel Sunil
  • 441
  • 7
  • 7
2

Use x11docker by mviereck:

https://github.com/mviereck/x11docker#hardware-acceleration says

Hardware acceleration

Hardware acceleration for OpenGL is possible with option -g, --gpu.

This will work out of the box in most cases with open source drivers on host. Otherwise have a look at wiki: feature dependencies. Closed source NVIDIA drivers need some setup and support less x11docker X server options.

This script is really convenient as it handles all the configuration and setup. Running a Docker image on X with GPU access is as simple as:

x11docker --gpu imagename
phil294
  • 10,038
  • 8
  • 65
  • 98
  • This seems overkill depending on the needs. The primary use of `x11docker` seems to be a GUI, with the option to enable GPU acceleration. – Babyburger Dec 12 '20 at 17:35
2

I would not recommend installing CUDA/cuDNN on the host if you can use Docker. Since at least CUDA 8 it has been possible to "stand on the shoulders of giants" and use nvidia/cuda base images maintained by NVIDIA in their Docker Hub repo. Go for the newest and biggest one (with cuDNN, if doing deep learning) if unsure which version to choose.

A starter CUDA container:

mkdir ~/cuda11
cd ~/cuda11

echo "FROM nvidia/cuda:11.0-cudnn8-devel-ubuntu18.04" > Dockerfile
echo "CMD [\"/bin/bash\"]" >> Dockerfile

docker build --tag mirekphd/cuda11 .

docker run --rm -it --gpus 1 mirekphd/cuda11 nvidia-smi

Sample output:

(If nvidia-smi is not found in the container, do not try to install it there; it was already installed on the host along with the NVIDIA GPU driver, and it should be made available from the host to the container if Docker has access to the GPU(s).)

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.57       Driver Version: 450.57       CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce GTX 108...  Off  | 00000000:01:00.0  On |                  N/A |
|  0%   50C    P8    17W / 280W |    409MiB / 11177MiB |      7%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

Prerequisites

  1. An appropriate NVIDIA driver with support for the latest CUDA version has to be installed first on the host (download it from NVIDIA Driver Downloads and then run mv driver-file.run driver-file.sh && chmod +x driver-file.sh && ./driver-file.sh). Drivers have been forward-compatible since CUDA 10.1.

  2. GPU access enabled in Docker by installing the NVIDIA Container Toolkit (sudo apt-get update && sudo apt-get install nvidia-container-toolkit) and then restarting the Docker daemon using sudo systemctl restart docker; a minimal sketch of this step follows below.
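A minimal sketch of step 2 on a Debian-based host, assuming the same nvidia-docker repository setup shown in the accepted answer:

distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker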

mirekphd
  • 4,799
  • 3
  • 38
  • 59