28

I'm trying to compare different methods for matrix multiplication. The first one is normal method:

do
{
    for (j = 0; j < i; j++)
    {
        for (k = 0; k < i; k++)
        {
            suma = 0;
            for (l = 0; l < i; l++)
                suma += MatrixA[j][l]*MatrixB[l][k];
            MatrixR[j][k] = suma;
        }
    }
    c++;
} while (c < iteraciones);

The second one consists of transposing matrix B first and then doing the multiplication by rows:

int f, co;
for (f = 0; f < i; f++) {
    for ( co = 0; co < i; co++) {
        MatrixB[f][co] = MatrixB[co][f];
    }
}

c = 0;
do
{
    for (j = 0; j < i; j++)
    {
        for (k = 0; k < i; k++)
        {
            suma = 0;
            for (l = 0; l < i; l++)
                suma += MatrixA[j][l]*MatrixB[k][l];
            MatrixR[j][k] = suma;
        }
    }
    c++;
} while (c < iteraciones);

The second method is supposed to be much faster, because it accesses contiguous memory slots, but I'm not seeing a significant improvement in performance. Am I doing something wrong?

I can post the complete code, but I don't think it's needed.

Peter
    Unless you are implementing your own matrix multiplication routines as a learning exercise you should seriously consider using an existing, vetted, optimized library such as [BLAS](http://www.netlib.org/blas/) or [LAPACK](http://www.netlib.org/lapack/). – maerics Feb 21 '13 at 16:33
  • The first fragment has 3 `{` and 4 `}`. My impression is that the innermost `}` is not wanted at all, and the assignment `MatrixR[j][k] = suma;` is not part of the innermost `for` loop, despite the indentation (so it is mis-indented; it should be at the same level as `suma = 0;`). – Jonathan Leffler Aug 05 '18 at 05:47
  • This answer might be helpful: https://stackoverflow.com/a/54546544/3234205 – Jianyu Huang Feb 06 '19 at 04:34

14 Answers

27

What Every Programmer Should Know About Memory (pdf link) by Ulrich Drepper has a lot of good ideas about memory efficiency; in particular, he uses matrix multiplication as an example of how knowing about memory, and using that knowledge, can speed up the process. Look at appendix A.1 in his paper, and read through section 6.2.1. Table 6.2 shows that he could get the running time down to about 10% of a naive implementation's time for a 1000x1000 matrix.

Granted, his final code is pretty hairy and uses a lot of system-specific stuff and compile-time tuning, but still, if you really need speed, reading that paper and reading his implementation is definitely worth it.

Alok Singhal
16

Getting this right is non-trivial. Using an existing BLAS library is highly recommended.

Should you really be inclined to roll your own matrix multiplication, loop tiling is an optimization that is of particular importance for large matrices. The tiling should be tuned to the cache size to ensure that the cache is not being continually thrashed, which will occur with a naive implementation. I once measured a 12x performance difference tiling a matrix multiply with matrix sizes picked to consume multiples of my cache (circa '97 so the cache was probably small).

Loop tiling algorithms assume that a contiguous linear array of elements is used, as opposed to rows or columns of pointers. With such a storage choice, your indexing scheme determines which dimension changes fastest, and you are free to decide whether row or column access will have the best cache performance.

There's a lot of literature on the subject. The following references, especially the Banerjee books, may be helpful:

[Ban93] Banerjee, Utpal, Loop Transformations for Restructuring Compilers: the Foundations, Kluwer Academic Publishers, Norwell, MA, 1993.

[Ban94] Banerjee, Utpal, Loop Parallelization, Kluwer Academic Publishers, Norwell, MA, 1994.

[BGS93] Bacon, David F., Susan L. Graham, and Oliver Sharp, Compiler Transformations for High-Performance Computing, Computer Science Division, University of California, Berkeley, Calif., Technical Report No UCB/CSD-93-781.

[LRW91] Lam, Monica S., Edward E. Rothberg, and Michael E Wolf. The Cache Performance and Optimizations of Blocked Algorithms, In 4th International Conference on Architectural Support for Programming Languages, held in Santa Clara, Calif., April, 1991, 63-74.

[LW91] Lam, Monica S., and Michael E Wolf. A Loop Transformation Theory and an Algorithm to Maximize Parallelism, In IEEE Transactions on Parallel and Distributed Systems, 1991, 2(4):452-471.

[PW86] Padua, David A., and Michael J. Wolfe, Advanced Compiler Optimizations for Supercomputers, In Communications of the ACM, 29(12):1184-1201, 1986.

[Wolfe89] Wolfe, Michael J. Optimizing Supercompilers for Supercomputers, The MIT Press, Cambridge, MA, 1989.

[Wolfe96] Wolfe, Michael J., High Performance Compilers for Parallel Computing, Addison-Wesley, CA, 1996.

Peeter Joot
7

ATTENTION: You have a BUG in your second implementation

for (f = 0; f < i; f++) {
    for (co = 0; co < i; co++) {
        MatrixB[f][co] = MatrixB[co][f];
    }
}

When the loop does f = 0, co = 1

        MatrixB[0][1] = MatrixB[1][0];

you overwrite MatrixB[0][1] and lose that value! When the loop gets to f = 1, co = 0

        MatrixB[1][0] = MatrixB[0][1];

the value copied is the same one that was already there, so the swap never happens.

pmg
  • Oh! I see, should I use a tmp matrix? That will increase the time even more than the first method – Peter Dec 15 '09 at 14:41
  • You can transpose the matrix with one temporary variable: `for (f=0; f – pmg Dec 16 '09 at 08:58
5

You should not write matrix multiplication. You should depend on external libraries. In particular you should use the GEMM routine from the BLAS library. GEMM often provides the following optimizations

Blocking

Efficient matrix multiplication relies on blocking your matrix and performing several smaller blocked multiplies. Ideally, the size of each block is chosen to fit nicely into cache, greatly improving performance.

Tuning

The ideal block size depends on the underlying memory hierarchy (how big is the cache?). As a result libraries should be tuned and compiled for each specific machine. This is done, among others, by the ATLAS implementation of BLAS.

Assembly Level Optimization

Matrix multiplication is so common that developers will optimize it by hand. In particular, this is done in GotoBLAS.

Heterogeneous(GPU) Computing

Matrix Multiply is very FLOP/compute intensive, making it an ideal candidate to be run on GPUs. cuBLAS and MAGMA are good candidates for this.

In short, dense linear algebra is a well studied topic. People devote their lives to the improvement of these algorithms. You should use their work; it will make them happy.

MRocklin
  • That is _only_ true if the matrices are large enough to justify the usage of more complex algorithms implemented in BLAS (AFAIK there is no fast path for small matrices). But +1 for citing the GEMM ("general matrix multiplication") routines, and explaining why it helps. – mctylr May 23 '14 at 14:21
4

If the matrix is not large enough or you don't repeat the operations a high number of times you won't see appreciable differences.

If the matrix is, say, 1,000x1,000 you will begin to see improvements, but I would say that if it is below 100x100 you should not worry about it.

Also, any 'improvement' may be on the order of milliseconds, unless you are either working with extremely large matrices or repeating the operation thousands of times.

Finally, if you switch to a faster computer, the differences will be even narrower!

Pablo Rodriguez
1

The computational complexity of multiplying two N*N matrices is O(N^3). Performance improves dramatically if you use an O(N^2.73) algorithm, which has probably been adopted by MATLAB. If you have MATLAB installed, try multiplying two 1024*1024 matrices: on my computer, MATLAB completes it in 0.7 s, while a C/C++ implementation of the naive algorithm like yours takes 20 s. If you really care about performance, look into lower-complexity algorithms. There is reportedly an O(N^2.4) algorithm, but it needs very large matrices before the other overheads become negligible.

1

Can you post some data comparing your two approaches for a range of matrix sizes? It may be that your expectations are unrealistic and that your second version really is faster, but you haven't done the measurements yet.

Don't forget, when measuring execution time, to include the time to transpose matrixB.

Something else you might want to try is comparing the performance of your code with that of the equivalent operation from your BLAS library. This may not answer your question directly, but it will give you a better idea of what you might expect from your code.

High Performance Mark
1

How big an improvement you get will depend on:

  1. The size of the cache
  2. The size of a cache line
  3. The degree of associativity of the cache

For small matrix sizes and modern processors, it's highly probable that the data from both MatrixA and MatrixB will be kept nearly entirely in the cache after you touch it the first time.

Andreas Brinck
1

Just something for you to try (but this would only make a difference for large matrices): separate out your addition logic from the multiplication logic in the inner loop, like so:

for (k = 0; k < i; k++)
{
    int sums[i];  /* a variable-length array -- legal in C99; on older
                     compilers allocate this buffer some other way */
    for (l = 0; l < i; l++)
        sums[l] = MatrixA[j][l]*MatrixB[k][l];

    int suma = 0;
    for (int s = 0; s < i; s++)
        suma += sums[s];
    MatrixR[j][k] = suma;
}

This is because you end up stalling your pipeline when you write to suma. Granted, much of this is taken care of by register renaming and the like, but with my limited understanding of hardware, if I wanted to squeeze every ounce of performance out of the code, I would do this, because now you don't have to stall the pipeline to wait for a write to suma. Since multiplication is more expensive than addition, you want to let the machine parallelize it as much as possible, so saving your stalls for the addition means you spend less time waiting in the addition loop than you would in the multiplication loop.

This is just my logic. Others with more knowledge in the area may disagree.

San Jacinto
  • Comment 1: If you're concerned about the instruction pipeline, consider unrolling the loops. Reducing the number of branches will speed up execution far more than separating multiplication from addition (some processors have single instructions that multiply and add). – Thomas Matthews Dec 15 '09 at 19:33
  • Comment 2: Move the `sums` array and the `suma` variables outside the scope of the `for` loop. Also consider moving the index variables `l` and `s` too. When they are inside the outer `for` loop they will created and destroyed for every iteration of the outer `for` loop. Removing unnecessary instructions will also speed up execution. – Thomas Matthews Dec 15 '09 at 19:36
  • I was actually assuming the compiler would unroll the loops because that's pretty common, but I think it's rather unlikely that it will cache the multiplications separately for you, as it's difficult to determine the semantics of a reduction. Maybe I'm wrong and every compiler does it. For your second comment, you are incorrect: the semantics in the code show that the memory is coming off of the stack, which means it is set aside at the beginning of the function, one time. – San Jacinto Dec 15 '09 at 19:51
  • Also, in regard to your first comment, I think you are missing the point. It's not atomicity of the mult we are concerned about, it's the time it takes to execute a mult. Modern processors can still take tens of times longer to do a mult than an add. So, if you stall the pipeline until you have a value to put into suma, you very well may be waiting 40 cycles+ longer each iteration of the loop than you need to, as opposed to only stalling once or twice (if you unroll the loop, or the compiler does). – San Jacinto Dec 16 '09 at 00:31
1

Nothing special, but a bit better: unroll the inner loop by two and accumulate into two independent sums (this assumes MatrixB has already been transposed, as in your second method):

    c = 0;
    do
    {
        for (j = 0; j < i; j++)
        {
            for (k = 0; k < i; k++)
            {
                sum = 0; sum_ = 0;
                for (l = 0; l + 1 < i; l += 2) {
                    sum  += MatrixA[j][l]   * MatrixB[k][l];
                    sum_ += MatrixA[j][l+1] * MatrixB[k][l+1];
                }
                if (l < i)  /* odd i: pick up the last element */
                    sum += MatrixA[j][l] * MatrixB[k][l];
                MatrixR[j][k] = sum + sum_;
            }
        }
        c++;
    } while (c < iteraciones);
1

Generally speaking, transposing B should end up being much faster than the naive implementation, but at the expense of wasting another NxN worth of memory. I just spent a week digging into matrix multiplication optimization, and so far the absolute hands-down winner is this:

for (int i = 0; i < N; i++)
    for (int k = 0; k < N; k++)
        for (int j = 0; j < N; j++)
            if (likely(k)) /* #define likely(x) __builtin_expect(!!(x), 1) */
                C[i][j] += A[i][k] * B[k][j];
            else
                C[i][j] = A[i][k] * B[k][j];

This is even better than Drepper's method mentioned in an earlier answer, as it works optimally regardless of the cache properties of the underlying CPU. The trick lies in reordering the loops so that all three matrices are accessed in row-major order.

0

If you are working with small matrices, then the improvement you mention is negligible. Performance will also vary depending on the hardware you are running on. But if you are working with matrices of millions of elements, then it will have an effect. Coming to the program: can you paste the code you have written?

Roopesh Majeti
  • What I'm doing is starting from a 1x1 matrix and going up to 1000x1000; the biggest one is multiplied once, and the smaller ones a number of times proportional to the maximum (MxMxM / NxNxN, M being the biggest). So, e.g., a 1x1 matrix is multiplied 1250000 times, a 2x2 578703 times, and so on. Then I divide the total time by the total iterations for that dimension – Peter Dec 15 '09 at 14:13
0

Very old question, but here's my current implementation for my OpenGL projects:

typedef float matN[N][N];

inline void matN_mul(matN dest, matN src1, matN src2)
{
    unsigned int i;
    for(i = 0; i < N*N; i++)
    {
        unsigned int row = i / N, col = i % N;
        dest[row][col] = src1[row][0] * src2[0][col] +
                         src1[row][1] * src2[1][col] +
                         ...
                         src1[row][N-1] * src2[N-1][col];
    }
}

Where N is replaced with the size of the matrix. So if you are multiplying 4x4 matrices, then you use:

typedef float mat4[4][4];    

inline void mat4_mul(mat4 dest, mat4 src1, mat4 src2)
{
    unsigned int i;
    for(i = 0; i < 16; i++)
    {
        unsigned int row = i / 4, col = i % 4;
        dest[row][col] = src1[row][0] * src2[0][col] +
                         src1[row][1] * src2[1][col] +
                         src1[row][2] * src2[2][col] +
                         src1[row][3] * src2[3][col];
    }
}

This function mainly minimizes loops, but the modulus might be taxing... On my computer this function performed roughly 50% faster than a triple-for-loop multiplication function.

Cons:

  • Lots of code needed (ex. different functions for mat3 x mat3, mat5 x mat5...)

  • Tweaks needed for irregular multiplication (ex. mat3 x mat4).....

Jas
  • I think your generic pseudocode is not quite right. Should probably be `i / N` and `i % N`. Division by powers of two can be optimized with bit shifting (`i >> 2`). Modulus of powers of two can be optimized with bitwise and (`i & 0x03`). Another option could be to just unroll the loop for the entire operation for small matrices (4x4 and smaller). – CubicleSoft Dec 09 '22 at 16:22
0

This is a very old question but I recently wandered down the rabbit hole and developed 9 different matrix multiplication implementations for both contiguous memory and non-contiguous memory (about 18 different functions). The results are interesting:

https://github.com/cubiclesoft/matrix-multiply

Blocking (aka loop tiling) didn't always produce the best results. In fact, I found that blocking may produce worse results than other algorithms depending on matrix size. Blocking really only started doing marginally better than other algorithms somewhere around 1200x1200, performed worse at around 2000x2000, but got better again past that point. This seems to be a common problem with blocking - certain matrix sizes just don't work well.

Also, blocking on contiguous memory performed slightly worse than the non-contiguous version. Contrary to common thinking, non-contiguous memory storage also generally outperformed contiguous memory storage, and blocking on contiguous memory performed worse than an optimized straight-pointer-math version.

I'm sure someone will point out areas of optimization that I missed or overlooked, but the general conclusion is that blocking/loop tiling may do slightly better, do slightly worse (smaller matrices), or do much worse. Blocking adds a lot of complexity to the code for largely inconsequential gains and a non-smooth, wacky performance curve that's all over the place.

In my opinion, while it isn't the fastest implementation of the nine options I developed and tested, Implementation 6 has the best balance between code length, code readability, and performance:

void MatrixMultiply_NonContiguous_6(double **C, double **A, double **B, size_t A_rows, size_t A_cols, size_t B_cols)
{
    double tmpa;

    for (size_t i = 0; i < A_rows; i++)
    {
        tmpa = A[i][0];

        for (size_t j = 0; j < B_cols; j++)
        {
            C[i][j] = tmpa * B[0][j];
        }

        for (size_t k = 1; k < A_cols; k++)
        {
            tmpa = A[i][k];

            for (size_t j = 0; j < B_cols; j++)
            {
                C[i][j] += tmpa * B[k][j];
            }
        }
    }
}

void MatrixMultiply_Contiguous_6(double *C, double *A, double *B, size_t A_rows, size_t A_cols, size_t B_cols)
{
    double tmpa;

    for (size_t i = 0; i < A_rows; i++)
    {
        tmpa = A[i * A_cols];

        for (size_t j = 0; j < B_cols; j++)
        {
            C[i * B_cols + j] = tmpa * B[j];
        }

        for (size_t k = 1; k < A_cols; k++)
        {
            tmpa = A[i * A_cols + k];

            for (size_t j = 0; j < B_cols; j++)
            {
                C[i * B_cols + j] += tmpa * B[k * B_cols + j];
            }
        }
    }
}

Simply swapping j and k (Implementation 3) does a lot on its own, but two small adjustments - using a temporary variable for A and removing the if conditional - notably improve performance over Implementation 3.

Here are the implementations (copied verbatim from the linked repository):

  • Implementation 1 - The classic naive implementation. Also the slowest. Good for showing the baseline worst case and validating the other implementations. Not so great for actual, real world usage.
  • Implementation 2 - Uses a temporary variable for matrix C which might end up using a CPU register to do the addition.
  • Implementation 3 - Swaps the j and k loops from Implementation 1. The result is a bit more CPU cache friendly but adds a comparison per loop and the temporary from Implementation 2 is lost.
  • Implementation 4 - The temporary variable makes a comeback but this time on one of the operands (matrix A) instead of the assignment.
  • Implementation 5 - Move the conditional outside the innermost for loop. Now we have two inner for-loops.
  • Implementation 6 - Remove conditional altogether. This implementation arguably offers the best balance between code length, code readability, and performance. That is, both contiguous and non-contiguous functions are short, easy to understand, and faster than the earlier implementations. It is good enough that the next three Implementations use it as their starting point.
  • Implementation 7 - Precalculate base row start for the contiguous memory implementation. The non-contiguous version remains identical to Implementation 6. However, the performance gains are negligible over Implementation 6.
  • Implementation 8 - Sacrifice a LOT of code readability to use simple pointer math. The result completely removes all array access multiplication. Variant of Implementation 6. The contiguous version performs better than Implementation 9. The non-contiguous version performs about the same as Implementation 6.
  • Implementation 9 - Return to the readable code of Implementation 6 to implement a blocking algorithm. Processing small blocks at a time allows larger arrays to stay in the CPU cache during inner loop processing for a small increase in performance at around 1200x1200 array sizes but also results in a wacky performance graph that shows it can actually perform far worse than other Implementations.
CubicleSoft