
Using the same approach as Strassen's algorithm, only 5 multiplications are sufficient to compute the square of a 2x2 matrix. If A = [[a, b], [c, d]], the multiplications are a * a, d * d, b * (a + d), c * (a + d) and b * c.

If we generalise this algorithm to compute the square of an n x n matrix (recursing on 2x2 blocks), the recurrence would be T(n) = 5T(n/2) + O(n^2), so the complexity would drop to O(n^(log2 5)) ≈ O(n^2.32).

I was asked what is wrong with this algorithm, and where it fails if we generalise it to compute the square of an n x n matrix.
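For concreteness, a quick symbolic check that these 5 products do reconstruct A^2 when the entries are scalars (a small sympy sketch; the names are mine):

    import sympy

    a, b, c, d = sympy.symbols('a b c d')  # scalar (commuting) entries

    # The four entries of A^2, computed naively with 8 multiplications:
    naive = [a*a + b*c, a*b + b*d,
             c*a + d*c, c*b + d*d]

    # The same entries assembled from the 5 products
    # a*a, d*d, b*(a + d), c*(a + d), b*c:
    five = [a*a + b*c, b*(a + d),
            c*(a + d), b*c + d*d]

    # Equal for scalars, because scalar multiplication commutes.
    assert all(sympy.expand(n - f) == 0 for n, f in zip(naive, five))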

77H3jjuu
  • Perhaps the point was that the formula you give, while correct for numbers, would be incorrect if a, b, c, d were themselves matrices, because a*b + b*d may not equal b*(a+d) = b*a + b*d. So while the method is correct for 2x2 matrices, you can't generalise it to 4x4 matrices by regarding them as a 2x2 matrix of 2x2 matrices. – dmuir Dec 18 '14 at 10:40
  • In other words, this requires that the multiplication is commutative. Matrix multiplication is clearly not commutative, so this algorithm cannot be applied recursively. Floating point operations also introduce numerical stability issues (i.e. the identity that the factoring uses does not truly hold under floating point). These two drawbacks would limit this method to specialty cases that don't care about them. – Adam Dec 18 '14 at 11:03

3 Answers


You can get by with only 5 multiplications at the root of the call tree, but some of those multiplications are not themselves squares, so the recursion does not stay within squaring and the running time is no better than Strassen's algorithm for multiplication.

Put another way, if we had an O(n^c) algorithm for squaring an n-by-n matrix, then we'd get an O(n^c) algorithm for multiplication by squaring the 2n-by-2n block matrix

[0 A]^2   [AB  0]
[B 0]   = [0  BA].
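Here is a small numpy sketch of that reduction (square is a stand-in for the hypothetical O(n^c) squaring routine):

    import numpy as np

    def square(M):
        # Stand-in for the hypothetical O(n^c) squaring algorithm.
        return M @ M

    rng = np.random.default_rng(0)
    n = 4
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))

    # Build the 2n-by-2n block matrix [[0, A], [B, 0]] and square it.
    Z = np.zeros((n, n))
    S = square(np.block([[Z, A], [B, Z]]))

    # The top-left block is AB and the bottom-right block is BA.
    assert np.allclose(S[:n, :n], A @ B)
    assert np.allclose(S[n:, n:], B @ A)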
David Eisenstat

This algorithm cannot be applied recursively, because matrix multiplication is not commutative.

In general, ab + bd != b(a + d), because b(a + d) = ba + bd and for matrices ab != ba. So we cannot save any multiplications once the entries are themselves matrices; the algorithm only works for a single 2x2 matrix with scalar entries.
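A quick numpy sketch of this failure for matrix blocks (my own example; random blocks almost surely do not commute):

    import numpy as np

    rng = np.random.default_rng(1)
    a, b, d = (rng.standard_normal((2, 2)) for _ in range(3))

    lhs = a @ b + b @ d   # the entry we actually need: ab + bd
    rhs = b @ (a + d)     # the factored version: ba + bd

    # These differ for generic blocks, because ab != ba.
    assert not np.allclose(lhs, rhs)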

77H3jjuu
  • This is true only when the [commutator](https://en.wikipedia.org/wiki/Commutator) [a, b] of a and b is nonzero, which, surely, is true of _most_ matrices. However, there are special cases where a and b _do_ commute (i.e. their commutator is zero), and so we can apply the speedup. – étale-cohomology Jul 24 '16 at 11:20
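For instance, blocks that are powers of the same matrix commute, so the factoring does hold for them (a small numpy sketch):

    import numpy as np

    m = np.random.default_rng(2).standard_normal((2, 2))
    a, b, d = m, m @ m, m @ m @ m  # powers of one matrix commute

    # With commuting blocks, ab + bd == b(a + d) as in the scalar case.
    assert np.allclose(a @ b + b @ d, b @ (a + d))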

Given a matrix A:

[a b]
[c d]

We can compute AA in the naive way with 8 multiplications:

[aa + bc  ab + bd]
[ac + cd  bc + dd]

Directly applying Strassen's multiplication would give us 7 multiplications.

But, using a similar approach to Strassen's, we can notice that:

ab + bd = b(a + d) 
ac + cd = c(a + d)

So, indeed, 5 multiplications suffice to obtain the result:
aa, dd, bc, b(a + d), c(a + d).
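As a sketch, here is the 5-multiplication squaring spelled out in code for a 2x2 matrix of numbers (my names, purely illustrative):

    def square_2x2(a, b, c, d):
        # Square A = [[a, b], [c, d]] with 5 multiplications.
        # Valid when the entries commute (e.g. plain numbers).
        p1 = a * a
        p2 = d * d
        p3 = b * c
        p4 = b * (a + d)
        p5 = c * (a + d)
        return (p1 + p3, p4,
                p5, p3 + p2)

    # Check against the naive 8-multiplication result:
    a, b, c, d = 2, 3, 5, 7
    assert square_2x2(a, b, c, d) == (a*a + b*c, a*b + b*d,
                                      c*a + d*c, c*b + d*d)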

There is nothing wrong with this method, i.e. it's correct for all inputs.

Maybe your interviewer wanted you to present your thinking and defend that it's actually not wrong, instead of just agreeing that it's wrong.

If your interviewer still said that it's wrong, a good idea would be to ask for their definition of "wrong". Maybe it means "not optimal" (in terms of the number of squarings, multiplications and additions).

Or maybe it's "wrong" because it doesn't scale up; for example, it won't work for 4x4 matrices treated as 2x2 block matrices.

Adam Stelmaszczyk