I've written two different ways of doing the same thing, and I'd like to compare which one executes faster. Of course it is always possible to benchmark, but benchmark results can differ from machine to machine and can be affected by many outside factors. How could I calculate which is faster without benchmarking? My thought would be that you would sum the times of all the operations done in the program. Is this a standard thing to do? It seems like benchmarking leaves a lot of room for error.
-
This is the study of [algorithmic complexity](https://en.wikipedia.org/wiki/Analysis_of_algorithms). – Phylogenesis Aug 11 '16 at 12:32
-
I think this is quite hard to do for a Ruby program, since you need deep knowledge of how each operation is interpreted, what objects get created, and so on. I think benchmarking with many iterations is the best way to go. – Albin Aug 11 '16 at 12:32
-
Also, benchmarking is the standard, right? – thesecretmaster Aug 11 '16 at 12:33
-
Algorithmic complexity and actual practical implementation may differ. Your algorithm may *theoretically* be very simple, but you can implement it so inefficiently that a more complex algorithm with a more efficient implementation may be faster. Theory is nice and all, but in the end it's what the computer does with your actual code that counts. – deceze Aug 11 '16 at 12:34
-
Reminds me of a story of a programmer that insisted his program was mathematically proven to be correct; problem was it still crashed and was useless when actually compiled and executed… – deceze Aug 11 '16 at 12:35
-
`My thought would be that you would sum the times of all the operations done in the program`. This is basically the definition of benchmarking. Actually any type of measurement in order to compare things. – lcguida Aug 11 '16 at 14:39
2 Answers
> My thought would be that you would sum the times of all the operations done in the program.
Yes, but you can't easily or reliably figure out those times by any method other than benchmarking.
The problem is that these times depend on the dynamic context of what happened previously in your program (or even system-wide). CPUs are complex beasts, and cache effects (data cache and instruction cache) are often a major factor. So is branch prediction. Why is it faster to process a sorted array than an unsorted array?
Static analysis of a small loop in assembly language is possible. For example, I can accurately predict how many cycles per iteration a simple loop can run on Intel Haswell, assuming no cache misses, based on Agner Fog's microarchitecture PDF and instruction tables. Going beyond that involves more and more guesswork.
Performance in a high-level interpreted language like Ruby might be somewhat predictable for experts who spend a lot of time tuning code in it, but almost certainly not "this will take this number of microseconds", only "this is probably a bit or a lot faster than that".
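In practice that means measuring. As a minimal sketch, Ruby's standard-library `Benchmark` module can compare two variants directly; the two implementations below (an explicit loop vs. `Array#sum`) are illustrative stand-ins for whatever two versions you wrote. `bmbm` runs a rehearsal pass first, which helps reduce warm-up effects like GC and allocation noise:

```ruby
require 'benchmark'

# Illustrative example: two ways of summing an array.
numbers = (1..100_000).to_a

# bmbm runs a "rehearsal" pass, then the real pass,
# so warm-up effects (GC, memory allocation) distort the results less.
Benchmark.bmbm do |x|
  x.report("explicit loop") do
    total = 0
    numbers.each { |n| total += n }
  end
  x.report("Array#sum") do
    numbers.sum
  end
end
```

Run each report enough times (or over large enough inputs) that the measured time dwarfs the timer's resolution.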

Algorithmic complexity will give you a theoretical speed comparison for an algorithm.
Your question is about an arbitrary program, but a program is more than a collection of algorithms.
The execution speed of a program depends on the context it runs in (I/O, the operating system (multitasking or not), the hardware).
So there is no method other than gathering statistics over a bunch of measurements, which is precisely the definition of a benchmark.
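To make the "statistics over a bunch of measurements" concrete, here is a minimal sketch in Ruby using the stdlib `Benchmark` module. The workload and the helper name `measure` are illustrative; the point is to repeat the measurement and report the distribution rather than trust a single run:

```ruby
require 'benchmark'

# Measure the same (illustrative) workload many times and summarize
# the distribution instead of relying on one noisy sample.
def measure(runs: 30)
  times = Array.new(runs) do
    Benchmark.realtime { (1..50_000).to_a.sum }
  end
  sorted = times.sort
  {
    median: sorted[runs / 2],
    mean:   times.sum / runs,
    min:    sorted.first,
    max:    sorted.last
  }
end

stats = measure
puts format("median %.6fs  min %.6fs  max %.6fs",
            stats[:median], stats[:min], stats[:max])
```

The median and min are usually more robust comparison points than the mean, since outside interference (other processes, GC pauses) only ever makes a run slower, never faster.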
