
I'm trying to make a C version of many of the things I have in MATLAB, and I timed the eigenvalue decomposition using MATLAB's eig vs a dsyev call in C, and MATLAB's is faster. For example:

10 x 10: 0.003246 seconds with eig, 0.013897 s with dsyev in C

100 x 100: 0.001516 seconds with eig, 0.001764 s with dsyev in C

1000 x 1000: 0.304438 seconds with eig, 0.356483 s with dsyev in C

I was under the impression that MATLAB just uses LAPACK calls for this low level stuff. Is there more to the picture?

Thanks!

Y. S.
  • It uses the Intel implementation of LAPACK, known as Intel MKL – gregswiss Nov 03 '15 at 01:42
  • Possible duplicate of [Why is MATLAB so fast in matrix multiplication?](http://stackoverflow.com/questions/6058139/why-is-matlab-so-fast-in-matrix-multiplication) – gregswiss Nov 03 '15 at 01:47
  • Hmm... interesting. Looks like a full C version won't be much faster than MATLAB then... – Y. S. Nov 03 '15 at 01:50
  • It's not a duplicate of that question. I think it's clear that MATLAB/LAPACK calls are faster than a straight for-loop implementation; what surprised me is the difference in the LAPACK call itself, and that MATLAB would be noticeably faster, when in theory they're doing exactly the same thing. – Y. S. Nov 03 '15 at 01:53
  • 1
    It's funny because I'm using the LAPACK library provided BY MATLAB for writing mex files. I was wondering really if the reason is in the choice of call (dsyev vs dsyevx vs dseyvr, etc) – Y. S. Nov 03 '15 at 01:57
  • quite possible - hard to know for sure as MATLAB is closed source – gregswiss Nov 03 '15 at 02:17
  • 2
    matlab quite probably has proprietary improvements on the eigenvalue code. Differences may also occur from the underlying BLAS library, the segmentation of matrix operations to be cache efficient can have quite an impact. – Lutz Lehmann Jan 20 '16 at 14:11
  • I am completely mystified as to why anyone would want to call LAPACK from MATLAB. – Milind R Dec 11 '18 at 18:31

0 Answers