Please help. Is there any way to run a project on a GPU with CUDA compute capability 2.1?
I'm trying to launch https://github.com/CorentinJ/Real-Time-Voice-Cloning/ and this is what I get:
Found GPU0 GeForce GT 630M which is of cuda capability 2.1.
PyTorch no longer supports this GPU because it is too old.
The minimum cuda capability that we support is 3.5.
warnings.warn(old_gpu_warn % (d, name, major, capability[1]))
Found 1 GPUs available. Using GPU 0 (GeForce GT 630M) of compute capability 2.1 with 2.1Gb total memory.
Preparing the encoder, the synthesizer and the vocoder...
Traceback (most recent call last):
  File "demo_cli.py", line 61, in <module>
    encoder.load_model(args.enc_model_fpath)
  File "C:\Users\kisel\Desktop\Real-Time-Voice-Cloning-master\encoder\inference.py", line 32, in load_model
    _model = SpeakerEncoder(_device, torch.device("cpu"))
  File "C:\Users\kisel\Desktop\Real-Time-Voice-Cloning-master\encoder\model.py", line 21, in __init__
    batch_first=True).to(device)
  File "C:\Users\kisel\anaconda3\envs\TestVoice\lib\site-packages\torch\nn\modules\module.py", line 386, in to
    return self._apply(convert)
  File "C:\Users\kisel\anaconda3\envs\TestVoice\lib\site-packages\torch\nn\modules\rnn.py", line 127, in _apply
    self.flatten_parameters()
  File "C:\Users\kisel\anaconda3\envs\TestVoice\lib\site-packages\torch\nn\modules\rnn.py", line 123, in flatten_parameters
    self.batch_first, bool(self.bidirectional))
RuntimeError: cuDNN error: CUDNN_STATUS_ARCH_MISMATCH
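
If compute capability 2.1 is simply too old for current PyTorch, would forcing everything onto the CPU be a workaround? Here is a minimal sketch of what I have in mind (SpeakerEncoder, encoder/inference.py and the _device name come from the traceback above; everything else is my guess, not the actual repository code):

    import torch
    from encoder.model import SpeakerEncoder  # class shown in the traceback

    # Hypothetical change in encoder/inference.py: build the encoder on the CPU
    # so the cuDNN LSTM path (which my GPU's architecture doesn't support) is never hit.
    _device = torch.device("cpu")  # instead of a CUDA device
    _model = SpeakerEncoder(_device, torch.device("cpu"))

    # The pretrained weights would presumably also need to be loaded onto the CPU:
    # checkpoint = torch.load(weights_fpath, map_location="cpu")

If that is the wrong place to change it, or the project already has a proper CPU-only option, please point me to it.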