
While searching for a way to avoid using a loop in my Matlab code, I found the following comments under a question on SE:

The statement "for loops are slow in Matlab" is no longer generally true since Matlab...euhm, R2008a?

and

Have you tried to benchmark a for loop vs what you already have? sometimes it is faster than vectorized code...

So I would like to ask: is there a commonly used way to test the speed of a process in Matlab? Can the user see somewhere how long a process takes, or is the only way to extend the process to several minutes in order to compare the times?

MasterPJ

3 Answers


The best tool for testing the performance of MATLAB code is Steve Eddins' timeit function, available here from the MATLAB Central File Exchange.

It handles many subtle issues related to benchmarking MATLAB code for you, such as:

  • ensuring that JIT compilation is used by wrapping the benchmarked code in a function
  • warming up the code
  • running the code several times and averaging

Update: As of release R2013b, timeit is part of core MATLAB.
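As a sketch of how `timeit` is typically used to settle the loop-vs-vectorization question, one can time two function handles over the same data (the function and variable names below are my own illustrative choices; note that local functions in scripts require R2016b or later, otherwise save `loopSquare` in its own file):

```matlab
% Compare a for loop against its vectorized equivalent with timeit.
x = rand(1e6, 1);

tLoop = timeit(@() loopSquare(x));   % loop version (helper below)
tVec  = timeit(@() x.^2);            % vectorized version
fprintf('for loop: %.4g s, vectorized: %.4g s\n', tLoop, tVec);

function y = loopSquare(x)
% Element-wise squaring with an explicit for loop.
y = zeros(size(x));
for k = 1:numel(x)
    y(k) = x(k)^2;
end
end
```

`timeit` calls each handle several times and reports a robust typical time, so no manual warm-up or repetition is needed.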


Update: As of release R2016a, MATLAB also includes a performance testing framework that handles the above issues for you in a similar way to timeit.
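For the R2016a performance testing framework, a minimal test class looks roughly like the following (the class and method names are my own; save it as `MyPerfTest.m` and run it with `results = runperf('MyPerfTest')`):

```matlab
% Minimal performance test using the MATLAB performance framework.
classdef MyPerfTest < matlab.perftest.TestCase
    methods (Test)
        function testVectorizedSquare(testCase)
            x = rand(1e6, 1);            % setup, excluded from timing
            testCase.startMeasuring();   % framework times only this region
            y = x.^2;                    %#ok<NASGU>
            testCase.stopMeasuring();
        end
    end
end
```

The framework handles warm-up and repeated sampling itself, and `runperf` returns `MeasurementResult` objects you can inspect statistically.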

Sam Roberts
    +1 for a method I did not know about yet and that seems to be well thought out. – Thilo Dec 20 '12 at 15:46
    The word `"averaging"` is at best misleading when timeit uses the median; there also looks to be no way to change the number of runs. Do you know whether it is possible to get the standard deviation of the running times somehow? It would be useful to keep measuring until the SD is under some level. – hhh Dec 13 '13 at 10:43
  • @hhh I'm not sure what's misleading about that word - the median is an average. Anyway - it's quite easy to modify the functionality of `timeit` if you like (I'm referring here to the File Exchange version rather than the newer, built-in version, but I would expect they are similar). Just type `edit timeit` to see the internals of the function, and you'll see that it exercises the function several times and then calls `median` on them. Just modify `timeit` to output the individual times as well, and you can take whatever form of average you like, or standard deviations as well. – Sam Roberts Dec 13 '13 at 16:27

You can use the profiler to assess how much time your functions, and the blocks of code within them, are taking.

>> profile on; % Starts the profiler
>> myfunctiontorun( ); % This can be a function, script or block of code
>> profile viewer; % Opens the viewer showing you how much time everything took

The viewer also clears the current profile data, ready for next time.

Bear in mind that profiling does tend to slow execution a bit, but I believe it does so in a fairly uniform way across everything.

Obviously, if your function is very quick, you might find you don't get reliable results, so if you can run it many times or extend the computation, that will improve matters.

If it's really simple stuff you're testing, you can also just time it using tic and toc:

>> tic; % Start the timer
>> myfunctionname( );
>> toc; % End the timer and display elapsed time

Also if you want multiple timers, you can assign them to variables:

>> mytimer = tic;
>> myfunctionname( );
>> toc(mytimer);

Finally, if you want to store the elapsed time instead of display it:

>> myresult = toc;
n00dle
    The profiler does not slow down execution in a way that is necessarily uniform. It disables the JIT compiler, which can have very different effects on different pieces of code. – Sam Roberts Dec 20 '12 at 15:21

I think it's fair to say that many of us time Matlab code by wrapping the block we're interested in between tic and toc. Furthermore, we take care to ensure that the total time is on the order of tens of seconds (rather than single seconds or hundreds of seconds), repeat the measurement 3-5 times, take some measure of central tendency (such as the mean), and draw our conclusions from that.

If the piece of code takes less than, say, 10 s, repeat it as many times as necessary to bring it into that range, being careful to avoid any impact of one iteration on the next. And if the code naturally takes hundreds of seconds or longer, either spend longer on the testing or try it with artificially small input data so it runs more quickly.

In my experience it's not necessary to run programs for minutes to get data on average run time with acceptably low variance. If I run a program 5 times and one (or two) of the results is wildly different from the mean I'll re-run it.
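The repeat-and-average approach above can be sketched like this (`myfunctionname` is a placeholder for the code under test, as in the earlier answer):

```matlab
% Run the code under test several times and summarize the timings.
nReps = 5;
times = zeros(nReps, 1);
for k = 1:nReps
    tStart = tic;
    myfunctionname();          % placeholder: the code being benchmarked
    times(k) = toc(tStart);
end
fprintf('mean %.3g s, std %.3g s\n', mean(times), std(times));
```

The standard deviation gives a quick check for the "wildly different" runs mentioned above; if it is large relative to the mean, re-run the measurement.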

Of course, if the code has any features which make its run time non-deterministic then it's a different matter.

Eitan T
High Performance Mark