
I'm trying to run a detectron2 inference example. I've installed torch 1.7.1 with CUDA 11.0:

pip3 install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio===0.7.2 -f https://download.pytorch.org/whl/torch_stable.html

I've also installed the pre-built detectron2 wheel with the following command:

python3 -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu110/torch1.7/index.html

nvidia-smi shows the following:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03    Driver Version: 460.32.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce 920M        Off  | 00000000:04:00.0 N/A |                  N/A |
| N/A   38C    P0    N/A /  N/A |    259MiB /  2004MiB |     N/A      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
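To check whether the installed torch wheel was actually built for this GPU, I compared the device's compute capability against the CUDA architectures the wheel ships kernels for (the helper `arch_supported` below is my own diagnostic sketch, not part of torch or detectron2):

```python
# Diagnostic sketch: does the installed torch wheel include kernels for this GPU?
# A mismatch here typically produces "no kernel image is available for execution".
import re


def arch_supported(device_cap, arch_list):
    """True if (major, minor) device_cap matches an entry like 'sm_37' in arch_list."""
    caps = set()
    for arch in arch_list:
        m = re.fullmatch(r"sm_(\d+)", arch)
        if m:
            digits = m.group(1)
            # last digit is the minor version, the rest is the major version
            caps.add((int(digits[:-1]), int(digits[-1])))
    return device_cap in caps


if __name__ == "__main__":
    try:
        import torch
        if torch.cuda.is_available():
            cap = torch.cuda.get_device_capability(0)
            archs = torch.cuda.get_arch_list()
            print("device capability:", cap)
            print("wheel arch list:  ", archs)
            print("supported:", arch_supported(cap, archs))
        else:
            print("CUDA is not available to torch")
    except ImportError:
        print("torch is not installed")
```

If `supported` comes out `False`, the pre-built wheel simply has no kernels compiled for the GPU's architecture, independent of the driver shown by nvidia-smi.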

When I try to run the following code:

import tkinter as tk
from tkinter import filedialog
import cv2

from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog

root = tk.Tk()
root.withdraw()

# Open a file dialog and display the selected image
file_path = filedialog.askopenfilename(title="Choose an image")
input_image = cv2.imread(file_path)

# Show image
cv2.imshow('Input', input_image)

cfg = get_cfg()
# add project-specific config (e.g., TensorMask) here if you're not running a model in detectron2's core library
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # set threshold for this model
# Find a model from detectron2's model zoo. You can use the https://dl.fbaipublicfiles... url as well
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
predictor = DefaultPredictor(cfg)
outputs = predictor(input_image)

# look at the outputs. See https://detectron2.readthedocs.io/tutorials/models.html#model-output-format for specification
print(outputs["instances"].pred_classes)
print(outputs["instances"].pred_boxes)

# We can use `Visualizer` to draw the predictions on the image.
v = Visualizer(input_image[:, :, ::-1], MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), scale=1.2)
out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
cv2.imshow('Output', out.get_image()[:, :, ::-1])

# wait for exit
if cv2.waitKey(0) & 0xff == 27:
    cv2.destroyAllWindows()

I get the following RuntimeError:

Traceback (most recent call last):
  File "/home/ion/Documents/Detectron2_test/main.py", line 28, in <module>
    outputs = predictor(input_image)
  File "/home/ion/.local/lib/python3.8/site-packages/detectron2/engine/defaults.py", line 223, in __call__
    predictions = self.model([inputs])[0]
  File "/home/ion/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ion/.local/lib/python3.8/site-packages/detectron2/modeling/meta_arch/rcnn.py", line 149, in forward
    return self.inference(batched_inputs)
  File "/home/ion/.local/lib/python3.8/site-packages/detectron2/modeling/meta_arch/rcnn.py", line 197, in inference
    images = self.preprocess_image(batched_inputs)
  File "/home/ion/.local/lib/python3.8/site-packages/detectron2/modeling/meta_arch/rcnn.py", line 222, in preprocess_image
    images = [(x - self.pixel_mean) / self.pixel_std for x in images]
  File "/home/ion/.local/lib/python3.8/site-packages/detectron2/modeling/meta_arch/rcnn.py", line 222, in <listcomp>
    images = [(x - self.pixel_mean) / self.pixel_std for x in images]
RuntimeError: CUDA error: no kernel image is available for execution on the device

I don't know if there is a problem with my GPU or something else, but I couldn't figure it out so far.
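One thing I considered as a sanity check (an assumption on my part, not a confirmed fix) is forcing detectron2 onto the CPU before building the predictor, to see whether the failure is GPU-specific:

```python
# Config fragment: run the same model on the CPU instead of CUDA.
# If inference succeeds this way, the problem is the GPU/wheel combination,
# not the model or the input image.
cfg.MODEL.DEVICE = "cpu"
predictor = DefaultPredictor(cfg)
```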

