
I am trying to perform inference with onnxruntime-gpu. To that end, I installed CUDA, cuDNN, and onnxruntime-gpu on my system and checked that my GPU is compatible (versions listed below).
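
For completeness, this is the kind of quick check I use to confirm I am on the GPU build (get_device() and get_available_providers() are standard onnxruntime calls; nothing here is specific to my model):

import onnxruntime as rt

print(rt.__version__)                 # 1.13.1 in my case
print(rt.get_device())                # 'GPU' for the onnxruntime-gpu build
print(rt.get_available_providers())   # should include 'CUDAExecutionProvider'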

When I attempt to start an inference session, I receive the following warning:

>>> import onnxruntime as rt
>>> rt.get_available_providers()
['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
>>> rt.InferenceSession("[ PATH TO MODEL .onnx]", providers= ['CUDAExecutionProvider'])
2023-01-31 09:07:03.289984495 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:578 CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.
<onnxruntime.capi.onnxruntime_inference_collection.InferenceSession object at 0x7f740b4af100>
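
Despite the warning, the session is still created; it just falls back to the CPU. Asking the session which providers it actually ended up with makes that visible (get_providers() is a standard InferenceSession method; the path placeholder is the same as above):

sess = rt.InferenceSession("[ PATH TO MODEL .onnx]", providers=['CUDAExecutionProvider'])
print(sess.get_providers())   # in the failing case this presumably reports only ['CPUExecutionProvider']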

However, if I import torch first, inference runs on my GPU, and my Python process shows up in nvidia-smi as soon as I start the inference session:

$ python
Python 3.8.16 (default, Dec  7 2022, 01:12:06)
[GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> import onnxruntime as rt
>>> sess = rt.InferenceSession("[ PATH TO MODEL .onnx]", providers=['CUDAExecutionProvider'])
>>>

Does anyone know why this is the case? The import order matters: if I import torch after importing onnxruntime, I get the same warning as if I had not imported torch at all.
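
One way to see what the torch import changes is to list the CUDA-related shared objects already mapped into the interpreter process. A minimal Linux-only sketch (the keyword filter is just my guess at which libraries matter):

def loaded_cuda_libs():
    # Collect the file-backed mappings of this process and keep CUDA/cuDNN-related ones.
    with open("/proc/self/maps") as f:
        paths = {line.split()[-1] for line in f if ".so" in line}
    return sorted(p for p in paths if any(k in p for k in ("cuda", "cudnn", "cublas")))

print(loaded_cuda_libs())   # presumably empty before `import torch`, populated afterwards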

I checked the __init__ of the torch package and tracked the relevant behavior down to the loading of libtorch_global_deps.so:


import ctypes
lib_path = '[ path to my .venv38]/lib/python3.8/site-packages/torch/lib/libtorch_global_deps.so'
ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
Running these lines manually before importing onnxruntime

$ python
Python 3.8.16 (default, Dec  7 2022, 01:12:06)
[GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import ctypes
>>> lib_path = '[ path to my .venv38]/lib/python3.8/site-packages/torch/lib/libtorch_global_deps.so'
>>> ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
>>> import onnxruntime as rt
>>> sess = rt.InferenceSession("[ PATH TO MODEL .onnx]", providers=['CUDAExecutionProvider'])
>>>

also does the trick.
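
A small wrapper along these lines could keep the workaround contained (the library path is still just a placeholder for my venv, and silently skipping a missing file is an arbitrary choice on my part):

import ctypes
import os

def preload_torch_global_deps(lib_path):
    # Load the library with RTLD_GLOBAL so that onnxruntime's CUDA provider
    # can resolve the CUDA/cuDNN symbols it needs.
    if os.path.exists(lib_path):
        ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)

preload_torch_global_deps(
    "[ path to my .venv38]/lib/python3.8/site-packages/torch/lib/libtorch_global_deps.so"
)
import onnxruntime as rt   # CUDAExecutionProvider should now initialise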

Installed versions

  • NVIDIA-SMI 510.108.03
  • Driver Version: 510.108.03
  • CUDA Version: 11.6
  • cuDNN Version: cudnn-11.4-linux-x64-v8.2.4.15
  • onnx==1.12.0
  • onnxruntime-gpu==1.13.1
  • torch==1.12.1+cu116
  • torchvision==0.13.1+cu116
  • Python version 3.8
  • Ubuntu 22.04 5.19.3-051903-generic

Python packages are installed in a virtual environment.
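
Given these versions, one sanity check is whether the CUDA/cuDNN runtime libraries can be dlopen'ed from the environment at all; the sonames below are what I would expect for CUDA 11.x / cuDNN 8, so treat them as assumptions:

import ctypes

for soname in ("libcudart.so.11.0", "libcublas.so.11", "libcudnn.so.8"):
    try:
        ctypes.CDLL(soname)
        print(soname, "-> found")
    except OSError as exc:
        print(soname, "-> not found:", exc)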

