
I'm using a laptop which has Intel Corporation HD Graphics 5500 (rev 09), and AMD Radeon r5 m255 graphics card.

Does anyone know how to set it up for deep learning, specifically fastai/PyTorch?

Mohanned ElSayed

2 Answers


Update 3:

Since late 2020, the torch-mlir project has come a long way and now supports all major operating systems. Using torch-mlir, you can use your AMD, NVIDIA, or Intel GPU with the latest version of PyTorch. You can download the binaries for your OS from here.

Update 2:

Since October 21, 2021, you can use the DirectML version of PyTorch.
DirectML is a high-performance, hardware-accelerated, DirectX 12-based library that provides GPU acceleration for ML tasks. It supports all DirectX 12-capable GPUs from vendors such as AMD, Intel, NVIDIA, and Qualcomm.

Update:
For the latest version of PyTorch with DirectML, see torch-directml.
You can install the latest version using pip:

pip install torch-directml

For detailed explanation on how to setup everything see Enable PyTorch with DirectML on Windows.
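Once installed, selecting the device is the only change most scripts need: you get a device handle from `torch_directml` and pass it to `.to(device)` as usual. A minimal sketch (the `ImportError` fallback is just for illustration, so the snippet also runs on machines without torch-directml):

```python
def pick_device():
    """Return a DirectML device if torch-directml is installed, else 'cpu'."""
    try:
        import torch_directml  # provided by `pip install torch-directml`
        return torch_directml.device()
    except ImportError:
        return "cpu"  # fall back when torch-directml is not installed

device = pick_device()
print(device)
```

Tensors and models are then moved onto `device` with the usual `tensor.to(device)` / `model.to(device)` calls.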

A side note concerning pytorch-directml:
Microsoft has changed the way it releases pytorch-directml. It deprecated the old 1.8 version and now offers the new torch-directml (as opposed to the previously named pytorch-directml).
It is now installed as a plugin for the actual version of PyTorch and works alongside it.

Old version:
The initial release of pytorch-directml (Oct 21, 2021):

Microsoft released Pytorch_DML a few hours ago. You can now install it (on Windows or in WSL) using the PyPI package:
pytorch-directml 1.8.0a0.dev211021
pip install pytorch-directml

So if you are on windows or using WSL, you can hop in and give this a try!

Update:

As of PyTorch 1.8 (March 04, 2021), AMD ROCm versions are made available from PyTorch's official website. You can now easily install them on Linux and Mac, the same way you used to install the CUDA/CPU versions.

Currently, only pip packages are provided. Also, the Mac and Windows platforms are still not supported (I haven't tested with WSL2, though!).
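For reference, the install command generated by the selector on pytorch.org has roughly this shape; the ROCm version string below is illustrative, so copy the exact command the site gives you for your setup:

```shell
# Illustrative only -- get the exact command from pytorch.org's install selector.
pip install torch torchvision -f https://download.pytorch.org/whl/rocm4.0.1/torch_stable.html
```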

Old answer:

You need to install the ROCm version. The official AMD instructions on building PyTorch are here.

There was previously a wheel package for ROCm, but it seems AMD no longer distributes it; instead, you need to build PyTorch from source, as the guide linked above explains.
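Per AMD's guide, the source build boils down to "hipifying" the CUDA sources and then building with ROCm enabled. Very roughly (exact steps and flags may differ between ROCm releases, so follow the guide for your version):

```shell
# Sketch of the ROCm source build, assuming ROCm itself is already installed.
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
python tools/amd_build/build_amd.py   # "hipify" the CUDA sources for ROCm
USE_ROCM=1 python setup.py install
```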

However, you may consult this page, to build the latest PyTorch version: The unofficial page of ROCm/PyTorch.

Hossein
  • Is there any support for the Vega 11? I only see the Vega 10 in that list. – Simd Aug 02 '20 at 07:36
  • I don't know; there seems to be an issue at their repo that might be worth a look: [link](https://github.com/RadeonOpenCompute/ROCm/issues/949) – Hossein Aug 02 '20 at 12:32
  • @Hossein, could you explain or help me how to use torch-mlir on Windows to convert a CUDA project to DirectML so I can use my AMD card? I'm trying to use my AMD GPU in some AI projects but I can't. I've seen this alternative, but I'm not sure how to use it. I tried it, but only the CPU is used. What should I do? – Milor123 May 08 '23 at 21:15
  • @Milor123 the comments section is not a good place for these questions; please either ask a separate question detailing exactly what you've done and what's wrong, or kindly refer to the official torch-mlir Discord or its GitHub repository's issues. – Hossein May 11 '23 at 15:44
  • @Milor123 also note that torch-mlir and DirectML are two completely separate projects. Also, it doesn't convert your CUDA code to some other codebase, so to speak; please refer to the documentation for further clarification, and then ask a new question if the problem persists. – Hossein May 12 '23 at 08:12

Update: In March 2021, PyTorch added support for AMD GPUs; you can just install it and configure it like any other CUDA-based GPU. Here is the link

I don't know about PyTorch, but even though Keras is now integrated with TF, you can use Keras on an AMD GPU using the PlaidML library (link!) made by Intel. It's pretty cool and easy to set up, and it's pretty handy to be able to switch Keras backends for different projects.
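Switching the backend is essentially a one-liner once plaidml-keras is installed (`pip install plaidml-keras`, then run `plaidml-setup` once to pick your GPU). A sketch, with the Keras import guarded so the snippet also runs where plaidml-keras is absent:

```python
import os

# Must be set before Keras is imported; this is how PlaidML hooks in as a backend.
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

try:
    import keras  # assumes plaidml-keras is installed; guarded for illustration
    print(keras.backend.backend())
except ImportError:
    print("plaidml-keras not installed; run: pip install plaidml-keras")
```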

Prhyme