Could someone show me how to use loop tiling/loop blocking effectively for large dense matrix multiplication? I am computing C = AB with 1000x1000 matrices. I have followed the loop tiling example on Wikipedia, but I get worse results with tiling than without it.
http://en.wikipedia.org/wiki/Loop_tiling
I have provided some code below. The naive method is very slow due to cache misses. The transpose method creates the transpose of B in a buffer and gives the fastest result: matrix multiplication is O(n^3) while the transpose is only O(n^2), so for n = 1000 the transpose costs roughly a thousandth of the multiplication and its overhead is negligible. The wiki method without blocking is also fast and does not need a buffer. The blocking method is slower. Another problem with blocking is that each block of C gets updated several times (once per block of the inner dimension), which is a challenge for threading/OpenMP because it can cause race conditions if one is not careful.
I should point out that with AVX and a modification of the transpose method I get results faster than Eigen. However, my results with SSE are a bit slower than Eigen's, so I think I could be using the cache better.
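For reference, the general idea of that AVX inner loop is sketched below. This is only an illustration, not my exact kernel: the name dot_avx is made up, it assumes M is a multiple of 8 (otherwise a scalar tail loop is needed), and it computes one element of C as a dot product of a row of A with a row of the pre-transposed B.

#include <immintrin.h>

// Sketch of an AVX dot product for the transpose method (not the real kernel).
// a = row i of A, bt = row j of B^T, both of length M (assumed multiple of 8).
static inline float dot_avx(const float *a, const float *bt, const int M) {
    __m256 acc = _mm256_setzero_ps();               // 8 running partial sums
    for (int l = 0; l < M; l += 8) {
        __m256 va = _mm256_loadu_ps(a + l);
        __m256 vb = _mm256_loadu_ps(bt + l);
        acc = _mm256_add_ps(acc, _mm256_mul_ps(va, vb));
    }
    // reduce the 8 partial sums to one float
    __m128 lo  = _mm256_castps256_ps128(acc);
    __m128 hi  = _mm256_extractf128_ps(acc, 1);
    __m128 sum = _mm_add_ps(lo, hi);
    sum = _mm_hadd_ps(sum, sum);
    sum = _mm_hadd_ps(sum, sum);
    return _mm_cvtss_f32(sum);
}

(Compile with -mavx or the equivalent for your compiler.)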
Edit:
I think I have an idea what I want to do. It comes from Cannon's algorithm from 1969.
http://en.wikipedia.org/wiki/Matrix_multiplication#Communication-avoiding_and_distributed_algorithms
I need to do block matrix multiplication and have each thread handle a sub-matrix of C rather than of A and B. For example, if I divide each matrix into four blocks, I would do:
+---------+   +---------+   +---------+
| C1  C2  |   | A1  A2  |   | B1  B2  |
|         | = |         | x |         |
| C3  C4  |   | A3  A4  |   | B3  B4  |
+---------+   +---------+   +---------+
thread 1: C1 = A1B1 + A2B3
thread 2: C2 = A1B2 + A2B4
thread 3: C3 = A3B1 + A4B3
thread 4: C4 = A3B2 + A4B4
That removes the race condition. I'll have to think about this.
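A rough sketch of that scheme with OpenMP is below. The name matmul_c_blocks is made up, and it assumes square n x n row-major matrices with n divisible by the block-grid size bs; each (block row, block column) of C is handled by one thread, so no two threads ever write the same element of C.

// Sketch: each thread computes one block of C (bs x bs grid of blocks).
// Assumes square n x n row-major matrices with n divisible by bs.
void matmul_c_blocks(const float *A, const float *B, float *C, const int n, const int bs) {
    const int nb = n / bs;                         // side length of one block
    #pragma omp parallel for collapse(2)
    for (int bi = 0; bi < bs; bi++) {              // block row of C
        for (int bj = 0; bj < bs; bj++) {          // block column of C
            // zero this C block; only this thread ever touches it
            for (int i = bi*nb; i < (bi+1)*nb; i++)
                for (int j = bj*nb; j < (bj+1)*nb; j++)
                    C[n*i + j] = 0.0f;
            // C(bi,bj) += A(bi,bk) * B(bk,bj), summed over bk
            for (int bk = 0; bk < bs; bk++) {
                for (int i = bi*nb; i < (bi+1)*nb; i++) {
                    for (int k = bk*nb; k < (bk+1)*nb; k++) {
                        const float a = A[n*i + k];
                        for (int j = bj*nb; j < (bj+1)*nb; j++) {
                            C[n*i + j] += a * B[n*k + j];
                        }
                    }
                }
            }
        }
    }
}

With bs = 2 this is exactly the four-block picture above (thread 1 computes C1, and so on). For reference, here are the four versions I have now: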
// Naive triple loop: C (NxK) = A (NxM) * B (MxK), row-major.
// The innermost loop reads B column-wise (stride K), which thrashes the cache.
void matrix_mult_naive(const float *A, const float *B, float *C, const int N, const int M, const int K) {
    #pragma omp parallel for
    for(int i=0; i<N; i++) {
        for(int j=0; j<K; j++) {
            float tmp = 0;
            for(int l=0; l<M; l++) {
                tmp += A[M*i+l]*B[K*l+j];
            }
            C[K*i + j] = tmp;
        }
    }
}
// Transpose method: copy B^T into a buffer so the inner loop reads both
// A and B2 contiguously. B2 is K x M.
void matrix_mult_transpose(const float *A, const float *B, float *C, const int N, const int M, const int K) {
    float *B2 = (float*)aligned_malloc(M*K*sizeof(float), 32);
    transpose(B, B2, M, K, 1);
    #pragma omp parallel for
    for(int i=0; i<N; i++) {
        for(int j=0; j<K; j++) {
            float tmp = 0;
            for(int l=0; l<M; l++) {
                tmp += A[M*i+l]*B2[M*j+l];
            }
            C[K*i + j] = tmp;
        }
    }
    aligned_free(B2);
}
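For completeness, transpose() above builds B^T in B2 (B2 is K x M). Functionally it does the following; this is a simplified sketch, and my actual helper takes an extra (fifth) argument that I am not showing here.

// Simplified (non-blocked) version of what transpose() does:
// B is M x K, B2 becomes its K x M transpose.
void transpose_simple(const float *B, float *B2, const int M, const int K) {
    for (int l = 0; l < M; l++)          // row of B
        for (int j = 0; j < K; j++)      // column of B
            B2[M*j + l] = B[K*l + j];
}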
// "Wiki" method: loop order i, l, j, so the innermost loop walks both C and B
// row-wise (unit stride). No extra buffer is needed, but C must be zeroed first.
void matrix_mult_wiki(const float *A, const float *B, float *C, const int N, const int M, const int K) {
    for(int i=0; i<N; i++) {
        for(int j=0; j<K; j++) {
            C[K*i + j] = 0;
        }
    }
    #pragma omp parallel for
    for(int i=0; i<N; i++) {
        for(int l=0; l<M; l++) {
            float a = A[M*i+l];
            for(int j=0; j<K; j++) {
                C[K*i + j] += a*B[K*l+j];
            }
        }
    }
}
// Blocked version of the wiki method: the l and j loops are tiled.
// Each element of C is updated once per l-block, and the OpenMP parallel
// region is entered once per (l2, j2) tile pair.
void matrix_mult_wiki_block(const float *A, const float *B, float *C, const int N, const int M, const int K) {
    const int block_size = 8; //I have tried several different block sizes
    for(int i=0; i<N; i++) {
        for(int j=0; j<K; j++) {
            C[K*i + j] = 0;
        }
    }
    for(int l2=0; l2<M; l2+=block_size) {
        for(int j2=0; j2<K; j2+=block_size) {
            #pragma omp parallel for
            for(int i=0; i<N; i++) {
                for(int l=l2; l<min(M, l2+block_size); l++) {
                    for(int j=j2; j<min(K, j2+block_size); j++) {
                        C[K*i + j] += A[M*i+l]*B[K*l+j];
                    }
                }
            }
        }
    }
}
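For reference, this is roughly how I call these routines for the 1000x1000 case (sketch only: C is N x K, A is N x M, B is M x K, all row-major; plain malloc and dummy data are used here instead of my aligned allocation helpers and real inputs).

#include <stdlib.h>

int main(void) {
    const int N = 1000, M = 1000, K = 1000;
    float *A = (float*)malloc((size_t)N*M*sizeof(float));
    float *B = (float*)malloc((size_t)M*K*sizeof(float));
    float *C = (float*)malloc((size_t)N*K*sizeof(float));
    for (int i = 0; i < N*M; i++) A[i] = 1.0f;   // dummy data
    for (int i = 0; i < M*K; i++) B[i] = 2.0f;
    matrix_mult_wiki(A, B, C, N, M, K);          // or any of the other versions
    free(A); free(B); free(C);
    return 0;
}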