
I'm new to google colab.

I'm trying to do deep learning there.

I have written a class to create and train an LSTM net using just Python, not any specific deep learning library such as TensorFlow, PyTorch, etc.

I thought I was using a GPU because I had chosen the runtime type accordingly in Colab.

During the code execution, however, I was sometimes getting a message telling me to switch off GPU mode because I was not making use of it.

So, my question: how can one use the Google Colab GPU using just plain Python, without special AI libraries? Is there something like "decorator code" to put in my original code so that the GPU gets activated?

2 Answers


It's just easier to use frameworks like PyTorch or TensorFlow.

If not, you can try PyCUDA or Numba, which are closer to "pure" GPU programming, but that is even harder than just using PyTorch.
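
To make that route concrete, here is a minimal PyCUDA sketch (assuming a Colab GPU runtime and that pycuda has been installed with pip install pycuda): it compiles a small CUDA kernel from a Python string and launches it on a NumPy array.

import numpy as np
import pycuda.autoinit              # creates the CUDA context on import
import pycuda.driver as drv
from pycuda.compiler import SourceModule

# Compile a tiny CUDA kernel that adds 1 to every element of an array.
mod = SourceModule("""
__global__ void add_one(float *a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        a[i] += 1.0f;
}
""")
add_one = mod.get_function("add_one")

a = np.zeros(1 << 20, dtype=np.float32)
threads = 256
blocks = (a.size + threads - 1) // threads
# drv.InOut copies `a` to the GPU, runs the kernel, and copies the result back.
add_one(drv.InOut(a), np.int32(a.size), block=(threads, 1, 1), grid=(blocks, 1))
print(a[:5])  # expected: [1. 1. 1. 1. 1.]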

korakot
  • Thank you for your answer. Say that I translate my pure Python code to TensorFlow, writing my own back-propagation methods in TensorFlow. Do you think I would get the GPU or TPU benefit while not using the TensorFlow optimization tools? – Alessandro Pereira Rodrigues Mar 23 '20 at 13:58
  • No, you won't. Maybe you can do it fast with Swift for TensorFlow. – korakot Mar 23 '20 at 15:55

Make sure that the Nvidia drivers are up to date; you can also install the CUDA toolkit (not sure you need it in Colab).

You will also need Numba.

You can use conda to install them if you want.

Example:


conda install numba && conda install cudatoolkit

or

pip install numba
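
Once installed, a quick sanity check (assuming the GPU runtime is selected in Colab) confirms that Numba can actually see the GPU:

from numba import cuda

print(cuda.is_available())   # True if a CUDA-capable GPU is visible to Numba
cuda.detect()                # prints a summary of the detected device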

We will use the numba.jit decorator on the function we want to run on the GPU. The decorator has several parameters, but we will work with only the target parameter, which tells jit which backend to compile for ("cpu" or "cuda"); "cuda" corresponds to the GPU. If "cpu" is passed instead, jit optimizes the code to run faster on the CPU, which also improves speed.


from numba import jit, cuda
import numpy as np

@jit(target="cuda")
def func(a):
    for i in range(10000000):
        a[i] += 1

a = np.ones(10000000)
func(a)
Paritosh Yadav
  • Thanks for your reply. But when I try to use @jit(target="cuda"), I get the error "NotImplementedError: bounds checking is not supported for CUDA", which stops the execution of the code. That same run also says "UserWarning: autojit is deprecated and will be removed in a future release. Use jit instead.". If instead I use just @jit, the execution runs to the end but gives a giant warning. – Alessandro Pereira Rodrigues Mar 17 '20 at 21:52
  • Continuing the previous comment: I'm not sure whether whatever causes that giant warning is also preventing the intended acceleration of the code. – Alessandro Pereira Rodrigues Mar 17 '20 at 22:09
  • Sorry for commenting so many times, but now I know for sure that the Colab GPU is not being used, even with the @jit decorator deployed as indicated in the first comment. I'm still getting that same message telling me to switch off GPU mode because I am not making use of it. – Alessandro Pereira Rodrigues Mar 18 '20 at 00:43
  • @AlessandroPereiraRodrigues Sorry for the late reply; can you tell me the version of CUDA you are using? – Paritosh Yadav Mar 18 '20 at 12:31
  • Can you try it with CUDA version 8? I believe Colab has CUDA 9.0+ installed by default. – Paritosh Yadav Mar 18 '20 at 12:34
  • The Colab-installed CUDA version is 10.1.243. I have found the following code to install version 8 in Colab: "!pip install mxnet-cu80". But I don't know if I'm on the right track. Could you please give me the details of the CUDA version 8 installation, import, and use? Will I have to change my methods' code internally to use CUDA? Thank you very much for your help so far. – Alessandro Pereira Rodrigues Mar 19 '20 at 03:53
  • https://stackoverflow.com/a/51001727/11266109 , just ignore the mxnet part, i.e. points 8 and 9 in that answer. – Paritosh Yadav Mar 19 '20 at 06:33
  • For version 8 you can use this link https://developer.nvidia.com/cuda-80-ga2-download-archive or select a version from https://developer.nvidia.com/cuda-toolkit-archive – Paritosh Yadav Mar 19 '20 at 06:37
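
The error discussed above comes from the target="cuda" form of @jit, which newer Numba releases deprecate in favour of writing an explicit GPU kernel with numba.cuda.jit. A minimal sketch of the same increment function written that way, assuming a Colab GPU runtime, looks like this:

from numba import cuda
import numpy as np

@cuda.jit
def add_one(a):
    i = cuda.grid(1)               # absolute index of this GPU thread
    if i < a.size:
        a[i] += 1

a = np.zeros(10000000, dtype=np.float32)
d_a = cuda.to_device(a)            # copy the array to the GPU explicitly
threads = 256
blocks = (a.size + threads - 1) // threads
add_one[blocks, threads](d_a)      # launch the kernel over the whole array
a = d_a.copy_to_host()             # copy the result back
print(a[:5])                       # expected: [1. 1. 1. 1. 1.]

Because the work is launched through numba.cuda, Colab should register actual GPU usage and stop suggesting that you switch the runtime back to CPU.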