
I have a machine with several GPUs. My idea is to attach them to different Docker instances in order to use those instances for CUDA (or OpenCL) calculations.

My goal is to set up a Docker image with a fairly old Ubuntu and fairly old AMD video drivers (13.04). The reason is simple: upgrading to a newer driver version breaks my OpenCL program (due to buggy AMD Linux drivers).

So the question is the following. Is it possible to run a Docker image with an old Ubuntu, an old kernel (3.14, for example) and an old AMD (fglrx) driver on a fresh Arch Linux setup with a fresh 4.2 kernel and the newer AMD (fglrx) drivers from the repository?

P.S. I tried this answer (with the Nvidia cards), but unfortunately deviceQuery inside the Docker image doesn't see any CUDA devices (as also happened to some commenters on the original answer)...

P.P.S. My setup:

  1. CPU: Intel Xeon E5-2670
  2. GPUs:

    • 1 x Radeon HD 7970

       $ lspci -nn | grep Rad
         83:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT [Radeon HD 7970/8970 OEM / R9 280X] [1002:6798]
         83:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT HDMI Audio [Radeon HD 7970 Series] [1002:aaa0]
      
    • 2 x GeForce GTX Titan Black

petRUShka

  • I'm pretty sure all docker containers on a particular machine [must use the same kernel](http://stackoverflow.com/questions/25444099/why-docker-has-ability-to-run-different-linux-distribution) as the host. You can run an Ubuntu "image" on an Arch Linux setup, but they must use the same kernel (the host kernel). I think your question about deviceQuery inside a docker image is a separate issue. It's not clear to me that this is a programming question at all. – Robert Crovella Oct 14 '15 at 17:24
  • Some apps depend strongly on the kernel version (and therefore on its capabilities). So how is it possible to share the kernel if the versions are very different? Is it true that I should run the same OS "image" as the host OS? I think it is a programming question, at least in the DevOps sense. – petRUShka Oct 14 '15 at 18:01
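The shared-kernel point from the comments is easy to check for yourself: `uname -r` inside any container reports the host's kernel release, whatever distribution the image is based on. A quick sketch (image tag is just an example):

```shell
# Both commands print the SAME kernel release: a container does not
# boot its own kernel, it shares the host's.
uname -r
docker run --rm ubuntu:14.04 uname -r
```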

1 Answer


With Docker you rely on virtualization at the operating-system level. That means all containers use the same kernel as the host. If you wish to run a different kernel for each container, you'll probably have to use system-level virtualization, e.g. KVM or VirtualBox. If your setup supports Intel's VT-d, you can pass the GPU through as a PCIe device to the container (the better term in this case is virtual machine).
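For completeness: if staying on the host kernel (and the host's fglrx kernel module) is acceptable, the usual workaround is to expose the host's GPU device nodes to the container. A minimal sketch, with assumed device paths that depend on your driver:

```shell
# Sketch (assumed paths): share the host's GPU device nodes with an
# old-Ubuntu container. The container still runs on the HOST kernel,
# so the user-space fglrx libraries inside the image must match the
# fglrx kernel module loaded on the host -- mixing versions won't work.
docker run -it \
    --device=/dev/ati/card0 \
    --device=/dev/dri \
    ubuntu:12.04 /bin/bash
```

This does not give the container its own kernel or driver; it only grants access to devices the host kernel already manages, which is why the original goal (old driver in the container, new driver on the host) still requires a VM with PCIe passthrough.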

hbogert