
Can we use SimpleTransformers and fine-tune their pre-trained models without an NVIDIA graphics card? I have installed CUDA, but it still shows:

model = NERModel('bert', 'bert-base-uncased', labels=label, args=args)
~/my_env/lib/python3.8/site-packages/simpletransformers/ner/ner_model.py in __init__(self, model_type, model_name, labels, weight, args, use_cuda, cuda_device, onnx_execution_provider, **kwargs)
    281                     self.device = torch.device(f"cuda:{cuda_device}")
    282             else:
--> 283                 raise ValueError(
    284                     "'use_cuda' set to True when cuda is unavailable."
    285                     "Make sure CUDA is available or set use_cuda=False."

ValueError: 'use_cuda' set to True when cuda is unavailable. Make sure CUDA is available or set use_cuda=False.
– Mohit Bagaria

3 Answers


Unfortunately, you cannot use CUDA without an Nvidia graphics card. CUDA is a parallel computing platform developed by Nvidia that provides GPU acceleration (for deep learning, among other things) only on Nvidia hardware, so without an Nvidia card there is nothing for it to run on.

That being said, SimpleTransformers accepts use_cuda=False as an argument when constructing the model. This should let you fine-tune on your CPU, though be warned that it will take significantly longer than training on a GPU.

Edit: For your case, try something like:

model = NERModel('bert', 'bert-base-uncased', labels=label, args=args, use_cuda=False)

This should allow you to fine-tune on the CPU.
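For completeness, here is a minimal end-to-end CPU-only sketch. The label list, args dictionary, and toy training DataFrame are placeholders for illustration, not taken from the question:

import pandas as pd
from simpletransformers.ner import NERModel

# Placeholder label set and training data -- substitute your own.
label = ["O", "B-PER", "I-PER"]
train_data = pd.DataFrame(
    {
        "sentence_id": [0, 0, 0],
        "words": ["Alice", "met", "Bob"],
        "labels": ["B-PER", "O", "B-PER"],
    }
)

# Illustrative settings that keep CPU training time manageable.
args = {"num_train_epochs": 1, "train_batch_size": 8, "overwrite_output_dir": True}

model = NERModel(
    "bert",
    "bert-base-uncased",
    labels=label,
    args=args,
    use_cuda=False,  # train on the CPU since no CUDA-capable GPU is available
)
model.train_model(train_data)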

– Talos0248

From Wikipedia: CUDA (an acronym for Compute Unified Device Architecture) is a parallel computing platform and application programming interface (API) model created by Nvidia.

So the answer is most likely no: without an Nvidia GPU there is no CUDA device for PyTorch to use.
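You can confirm what PyTorch itself sees on your machine with a quick check (plain PyTorch, nothing SimpleTransformers-specific):

import torch

# False on machines without an NVIDIA GPU and working driver, even if the
# CUDA toolkit itself is installed.
print(torch.cuda.is_available())
print(torch.version.cuda)         # CUDA version PyTorch was built against, or None
print(torch.cuda.device_count())  # 0 when no usable NVIDIA GPU is present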

– Captain Trojan

CUDA is a parallel computing platform created by Nvidia specifically for accelerating computation on its own graphics cards. If you're using a non-Nvidia graphics card, it will not work (short of some unusual emulation layer). If you have an AMD card, there are similar alternatives, namely OpenCL, which may or may not work depending on your situation.
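If you want one script that runs both on machines with an Nvidia card and on machines without one, a common pattern is to gate use_cuda on what PyTorch detects. A sketch, with a hypothetical label list standing in for your own:

import torch
from simpletransformers.ner import NERModel

# Hypothetical label set -- substitute your own labels (and args) here.
label = ["O", "B-PER", "I-PER"]

# Uses the GPU when a CUDA-capable NVIDIA card is detected, otherwise
# falls back to the CPU (e.g. on AMD or integrated graphics).
model = NERModel(
    "bert",
    "bert-base-uncased",
    labels=label,
    use_cuda=torch.cuda.is_available(),
)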

– Nether Man
  • The AMD counterpart to NVIDIA CUDA is called ROCm. – paleonix Jul 21 '21 at 20:50
  • @PaulG. I was under the impression that OpenCL works with a variety of GPUs, and could also work with AMD? There seem to be drivers for it, at least. – Nether Man Jul 22 '21 at 13:10
  • My comment was meant as an addendum. I haven't used either ROCm or OpenCL, but I would guess that ROCm will have more features specifically for AMD GPUs like CUDA has for NVIDIA GPUs. Which one to choose depends on the context, I would say. – paleonix Jul 22 '21 at 15:57