I'm running a shared-memory Numba CUDA matrix multiplication, but I'm getting incorrect results, so I suspect the algorithm itself is wrong.
I saw another thread about this code, but the answer there was never posted and the code did not work.
The code is below:
# This part is for initializing everything
M = 128
N = 32
a = np.arange(M*N).reshape(M,N).astype(np.int32)
b = np.arange(M*N).reshape(N,M).astype(np.int32)
c = np.zeros((M, M)).astype(np.int32)
d_a = cuda.to_device(a)
d_b = cuda.to_device(b)
d_c = cuda.to_device(c)
block_size = (N,N)
grid_size = (int(M/N),int(M/N))
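For reference, the correct result is just the NumPy matrix product of a and b; the check isn't in my snippet above, but it is roughly:
# Host-side reference result (this is where the "y:" values shown at the end come from)
expected = np.matmul(a, b)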
And this is my kernel definition:
import numpy as np
from numba import cuda, float32
@cuda.jit
def fast_matmul(A, B, C):
    # Define an array in the shared memory
    # The size and type of the arrays must be known at compile time
    TPB = N
    sA = cuda.shared.array(shape=(TPB, TPB), dtype=float32)
    sB = cuda.shared.array(shape=(TPB, TPB), dtype=float32)
    x, y = cuda.grid(2)
    tx = cuda.threadIdx.x
    ty = cuda.threadIdx.y
    bpg = cuda.gridDim.x    # blocks per grid
    if x >= C.shape[0] and y >= C.shape[1]:
        # Quit if (x, y) is outside of valid C boundary
        return
    # Each thread computes one element in the result matrix.
    # The dot product is chunked into dot products of TPB-long vectors.
    tmp = 0.
    for i in range(bpg):
        # Preload data into shared memory
        sA[tx, ty] = A[x, ty + i * TPB]
        sB[tx, ty] = B[tx + i * TPB, y]
        # Wait until all threads finish preloading
        cuda.syncthreads()
        # Computes partial product on the shared memory
        for j in range(TPB):
            tmp += sA[tx, j] * sB[j, ty]
        # Wait until all threads finish computing
        cuda.syncthreads()
    C[x, y] = tmp
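The launch and the comparison aren't shown above; paraphrasing, what I run is along these lines, using the grid and block sizes defined earlier:
# Launch the kernel, copy the result back, and compare against the NumPy reference
fast_matmul[grid_size, block_size](d_a, d_b, d_c)
result = d_c.copy_to_host()
np.testing.assert_array_equal(result, expected)   # this is the check that fails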
I followed this example from here.
However, running the code gives me odd results such as:
x: array([[2147483647, 2147483647, 2147483647, ..., 2147483647, 2147483647,
2147483647],
[2147483647, 2147483647, 2147483647, ..., 2147483647, 2147483647,...
when it should be something like:
y: array([[ 1333248, 1333744, 1334240, ..., 1395248, 1395744,
1396240],
[ 3364864, 3366384, 3367904, ..., 3554864, 3556384,...
Could anyone please point out where I am going wrong?