I'm trying to use Whisper on my computer. I have an NVIDIA RTX 2060 GPU, and I've installed CUDA and FFmpeg.

I'm running this code:

import whisper

model = whisper.load_model("medium")
result = model.transcribe("venv/files/test1.mp3")
print(result["text"])

and getting this warning:

whisper\transcribe.py:114: UserWarning: FP16 is not supported on CPU; using FP32 instead
  warnings.warn("FP16 is not supported on CPU; using FP32 instead")

I don't understand why FP16 is not supported, since I have a good GPU and everything installed. Any help would be appreciated. Thanks.

I installed all the requirements and was expecting Whisper to use the GPU.

Nick ODell
2 Answers

You could try this:

result = model.transcribe("venv/files/test1.mp3", fp16=False)

That worked for me!
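Note that `fp16=False` only silences the warning by forcing FP32; the transcription still runs on the CPU. To check whether PyTorch can actually see your GPU, here is a quick sanity check (assuming PyTorch is installed):

```python
import torch

# True means a CUDA-enabled PyTorch build found your GPU;
# False means the build is CPU-only and Whisper will keep
# running on the CPU regardless of your hardware.
print(torch.cuda.is_available())
```

If this prints `False`, see the other answer for reinstalling PyTorch with CUDA support.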

Rok Benko

In order to utilize CUDA with Whisper, you have to:

  1. Uninstall the existing PyTorch.
  2. Install PyTorch with CUDA support.
  3. Load the PyTorch library.
  4. Chain a call to the `.to()` method after the original `load_model` call.

Full example

Terminal

pip3 uninstall -y torch torchvision torchaudio
# following command was generated using https://pytorch.org/get-started/locally/#with-cuda-1
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118 

file.py

import torch
import whisper

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = whisper.load_model('medium').to(device)
Dorad