
I'm running a shared-memory Numba CUDA kernel for matrix multiplication, but I'm getting incorrect results, so I think something in the algorithm is wrong.

I saw another thread about this code, but no answer was ever posted there and the code did not work.

The code is below:

# This part is for initializing everything
M = 128
N = 32


a = np.arange(M*N).reshape(M,N).astype(np.int32)
b = np.arange(M*N).reshape(N,M).astype(np.int32)
c = np.zeros((M, M)).astype(np.int32)

d_a = cuda.to_device(a)
d_b = cuda.to_device(b)
d_c = cuda.to_device(c)

block_size = (N,N)
grid_size = (int(M/N),int(M/N))

And this is my kernel definition:

import numpy as np
from numba import cuda, types
@cuda.jit
def fast_matmul(A, B, C):
    # Define an array in the shared memory
    # The size and type of the arrays must be known at compile time
    TPB = N
    
    sA = cuda.shared.array(shape=(TPB, TPB), dtype=float32)
    sB = cuda.shared.array(shape=(TPB, TPB), dtype=float32)

    x, y = cuda.grid(2)

    tx = cuda.threadIdx.x
    ty = cuda.threadIdx.y
    bpg = cuda.gridDim.x    # blocks per grid

    if x >= C.shape[0] and y >= C.shape[1]:
        # Quit if (x, y) is outside of valid C boundary
        return

    # Each thread computes one element in the result matrix.
    # The dot product is chunked into dot products of TPB-long vectors.
    tmp = 0.
    for i in range(bpg):
        # Preload data into shared memory
        sA[tx, ty] = A[x, ty + i * TPB]
        sB[tx, ty] = B[tx + i * TPB, y]

        # Wait until all threads finish preloading
        cuda.syncthreads()

        # Computes partial product on the shared memory
        for j in range(TPB):
            tmp += sA[tx, j] * sB[j, ty]

        # Wait until all threads finish computing
        cuda.syncthreads()

    C[x, y] = tmp

I followed the kernel from here.

However, running the code gives me odd results such as

x: array([[2147483647, 2147483647, 2147483647, ..., 2147483647, 2147483647,
        2147483647],
       [2147483647, 2147483647, 2147483647, ..., 2147483647, 2147483647,...

when it should be something like:

 y: array([[  1333248,   1333744,   1334240, ...,   1395248,   1395744,
          1396240],
       [  3364864,   3366384,   3367904, ...,   3554864,   3556384,...

Could anyone please point out where I am going wrong?

Ilknur Mustafa
1 Answer

  1. You're passing int32 arrays to a Numba kernel that expects float32 data. Don't do that.
  2. You also haven't actually shown your kernel launch. Please provide complete code.
  3. It's not clear what x and y are. Your code produces only one result, and it is in d_c.
  4. I also suspect your input data will overflow the int32 type. You should probably convert to float32 throughout. Using arange makes it hard to verify numerical correctness quickly/visually; while you're trying to get things working, just use ones instead (a quick host-side check is sketched right after this list).
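As an aside, one quick way to sanity-check the device result is to compare it against NumPy's own matmul on the host. This is a sketch of my own, not part of the original post: the helper name check_against_numpy and its parameters are made up here, and it assumes float32 inputs and a compiled @cuda.jit kernel. With ones as inputs, the reference result is exactly 32 everywhere for the sizes above, so any mismatch stands out immediately.

import numpy as np
from numba import cuda

# Hypothetical helper: run a compiled @cuda.jit matmul kernel and compare
# its output against NumPy's reference result. a and b are float32 host arrays.
def check_against_numpy(kernel, a, b, grid, block):
    d_a = cuda.to_device(a)
    d_b = cuda.to_device(b)
    d_c = cuda.to_device(np.zeros((a.shape[0], b.shape[1]), dtype=np.float32))
    kernel[grid, block](d_a, d_b, d_c)
    c = d_c.copy_to_host()
    # Use allclose rather than exact equality to allow for float32 rounding
    np.testing.assert_allclose(c, a @ b, rtol=1e-5)
    return c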

Here's a version of what you've shown, with the above ideas taken into account:

$ cat t35.py
import numpy as np
from numba import cuda, types, float32
@cuda.jit
def fast_matmul(A, B, C):
    # Define an array in the shared memory
    # The size and type of the arrays must be known at compile time
    TPB = N

    sA = cuda.shared.array(shape=(TPB, TPB), dtype=float32)
    sB = cuda.shared.array(shape=(TPB, TPB), dtype=float32)

    x, y = cuda.grid(2)

    tx = cuda.threadIdx.x
    ty = cuda.threadIdx.y
    bpg = cuda.gridDim.x    # blocks per grid

    if x >= C.shape[0] and y >= C.shape[1]:
        # Quit if (x, y) is outside of valid C boundary
        return

    # Each thread computes one element in the result matrix.
    # The dot product is chunked into dot products of TPB-long vectors.
    tmp = 0.
    for i in range(bpg):
        # Preload data into shared memory
        sA[tx, ty] = A[x, ty + i * TPB]
        sB[tx, ty] = B[tx + i * TPB, y]

        # Wait until all threads finish preloading
        cuda.syncthreads()

        # Computes partial product on the shared memory
        for j in range(TPB):
            tmp += sA[tx, j] * sB[j, ty]

        # Wait until all threads finish computing
        cuda.syncthreads()

    C[x, y] = tmp

# This part is for initializing everything
M = 128
N = 32


#a = np.arange(M*N).reshape(M,N).astype(np.float32)
#b = np.arange(M*N).reshape(N,M).astype(np.float32)
a = np.ones(M*N).reshape(M,N).astype(np.float32)
b = np.ones(M*N).reshape(N,M).astype(np.float32)
c = np.zeros((M, M)).astype(np.float32)

d_a = cuda.to_device(a)
d_b = cuda.to_device(b)
d_c = cuda.to_device(c)

block_size = (N,N)
grid_size = (int(M/N),int(M/N))

fast_matmul[grid_size,block_size](d_a, d_b, d_c)
c = d_c.copy_to_host()
print(c)
$ python t35.py
[[32. 32. 32. ... 32. 32. 32.]
 [32. 32. 32. ... 32. 32. 32.]
 [32. 32. 32. ... 32. 32. 32.]
 ...
 [32. 32. 32. ... 32. 32. 32.]
 [32. 32. 32. ... 32. 32. 32.]
 [32. 32. 32. ... 32. 32. 32.]]
$

I believe 32 is the correct answer here: each element of C is the dot product of two 32-element vectors of ones.

Also note that the posted example from which this is derived probably has some errors; see here.
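For what it's worth, one issue commonly pointed out with that example is the early return combined with the boundary test: threads that return early skip the cooperative shared-memory loads and the cuda.syncthreads() calls, which matters whenever the matrix dimensions are not an exact multiple of TPB (for the 128/32 sizes above it happens to be harmless). The commonly suggested adjustment, sketched below under those assumptions rather than taken from the linked post (the name fast_matmul_guarded is mine), is to zero-fill out-of-range shared loads and guard only the final store:

from numba import cuda, float32

TPB = 32  # threads per block in each dimension; must be a compile-time constant

@cuda.jit
def fast_matmul_guarded(A, B, C):
    # Same tiling scheme as above, but written so that every thread reaches
    # every cuda.syncthreads() call, even near the matrix edges.
    sA = cuda.shared.array(shape=(TPB, TPB), dtype=float32)
    sB = cuda.shared.array(shape=(TPB, TPB), dtype=float32)

    x, y = cuda.grid(2)
    tx = cuda.threadIdx.x
    ty = cuda.threadIdx.y
    bpg = cuda.gridDim.x  # assumes gridDim.x * TPB covers the shared (K) dimension

    tmp = float32(0.)
    for i in range(bpg):
        # Zero-fill loads that fall outside A/B so out-of-range tiles
        # contribute nothing to the partial sums
        sA[tx, ty] = 0
        sB[tx, ty] = 0
        if x < A.shape[0] and (ty + i * TPB) < A.shape[1]:
            sA[tx, ty] = A[x, ty + i * TPB]
        if y < B.shape[1] and (tx + i * TPB) < B.shape[0]:
            sB[tx, ty] = B[tx + i * TPB, y]

        # Every thread, including edge threads, participates in both barriers
        cuda.syncthreads()

        for j in range(TPB):
            tmp += sA[tx, j] * sB[j, ty]

        cuda.syncthreads()

    # Guard only the final store
    if x < C.shape[0] and y < C.shape[1]:
        C[x, y] = tmp

The launch configuration then just needs enough blocks to cover C, rounding up rather than assuming the sizes divide evenly by TPB.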

Robert Crovella