
I have a Python function that is slow, and I believe it would run faster on a GPU/TPU. I am using Google Colab. How do I modify the function so that I can use Numba to run it on the Colab GPU?

import numpy as np

def Tournament(x, w, cutoff, y):
  preds = np.matmul(x, w)
  # boolean masks: preds2 marks the low tail, preds the high tail of each column
  preds2 = (preds < np.quantile(preds, 1 - cutoff, axis=0))
  preds = (preds > np.quantile(preds, cutoff, axis=0))
  # keep only the selected returns; zero entries are treated as missing
  rets = y * preds
  rets[rets == 0] = np.nan
  rets2 = y * preds2
  rets2[rets2 == 0] = np.nan
  # per-column spread between the median high-tail return and the median low-tail return
  ans = np.nanmedian(rets, axis=0) - np.nanmedian(rets2, axis=0)
  ans = ans[:, None]
  # tiny random jitter to break ties; popsize is a global defined elsewhere
  rand2 = np.random.uniform(-1e-8, 1e-8, size=(popsize, 1))
  ans += rand2
  garbage, sort = np.unique(ans, return_index=True)
  sort = sort[:, None]
  return sort, ans
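
For context, here is roughly how I call it; the shapes, the popsize value, and the random data below are only placeholders for illustration:

import numpy as np

popsize = 500                        # global that Tournament reads
x = np.random.randn(1000, 20)        # observations x features
w = np.random.randn(20, popsize)     # one candidate weight vector per column
y = np.random.randn(1000, 1)         # per-observation returns, broadcast across candidates

sort, ans = Tournament(x, w, 0.9, y)
print(ans.shape)                     # (popsize, 1)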

I've tried importing the Numba library and putting @jit before the function, but it doesn't seem to work.
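
Concretely, the attempt looked something like the sketch below (the exact arguments passed to @jit may have differed):

from numba import jit

@jit                       # decorator placed directly above the function
def Tournament(x, w, cutoff, y):
  ...                      # same body as shown above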
