
Is it possible to set up a virtualized environment---be it a Docker container or a qemu VM---to run benchmarks that would not be much affected by the performance of the virtualization host?

For example, my computation benchmark would always clock in at ~60 seconds (presumably measured in CPU ticks) regardless of the actual hardware, I/O speeds would stay the same even if I upgrade the host to an SSD, and so on.
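
For concreteness, here is a rough sketch of the kind of measurement I have in mind (Python, purely illustrative): reporting CPU time instead of wall-clock time removes some of the host's scheduling noise, although it still depends on the CPU's clock speed.

```python
import time

def busy_work(n=10_000_000):
    # Placeholder workload; stands in for the real benchmark.
    total = 0
    for i in range(n):
        total += i * i
    return total

wall_start = time.perf_counter()   # wall-clock time
cpu_start = time.process_time()    # CPU time consumed by this process only

busy_work()

print(f"wall-clock: {time.perf_counter() - wall_start:.3f} s")
print(f"cpu time:   {time.process_time() - cpu_start:.3f} s")
```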

From what I've found so far, I'd say the above is not possible. So: how can I get as close as possible to that ideal, so that a benchmark run inside a virtualized environment is reproducible even for people who do not have the same hardware I do?
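
The closest I've gotten so far is pinning the container's resources so every host presents roughly the same ceiling to the benchmark. A minimal sketch (Python wrapping `docker run`; the limit values and the image name are just placeholders):

```python
import subprocess

# Run the benchmark image with fixed resource limits so every host
# presents (roughly) the same ceiling to the benchmark.
subprocess.run([
    "docker", "run", "--rm",
    "--cpus=1.0",                        # at most one CPU's worth of time
    "--memory=2g",                       # hard memory cap
    "--device-read-bps=/dev/sda:50mb",   # throttle block-device reads
    "--device-write-bps=/dev/sda:50mb",  # throttle block-device writes
    "my-benchmark-image",                # hypothetical image name
], check=True)
```

That caps how fast things can go, but it cannot speed up a slower host, so it only narrows the spread rather than removing it.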

user7610

1 Answer


One approach I heard about later is Virtual Time Execution.

The idea is to execute the code in a special environment that collects a detailed log of execution events, which can then be recalculated into the actual execution time on given hardware and OS. The reported accuracy is within 5-10%.

I saw this thesis about it: Software Performance Engineering using Virtual Time Program Execution.
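
To illustrate the idea only (a toy sketch, not what the thesis actually implements): you record counts of abstract execution events during a run, then replay that log against a per-machine cost model to estimate how long the run would have taken on that machine. All the numbers below are made up.

```python
from collections import Counter

# Toy event log: counts of abstract events recorded during one run.
event_log = Counter({"alu_op": 5_000_000, "mem_access": 800_000, "disk_read": 120})

# Hypothetical cost models: seconds per event on two different machines.
cost_models = {
    "laptop_hdd": {"alu_op": 0.4e-9, "mem_access": 60e-9, "disk_read": 8e-3},
    "server_ssd": {"alu_op": 0.3e-9, "mem_access": 40e-9, "disk_read": 0.2e-3},
}

for machine, costs in cost_models.items():
    # Recalculate the same event log into an estimated runtime per machine.
    estimate = sum(count * costs[event] for event, count in event_log.items())
    print(f"{machine}: ~{estimate:.3f} s estimated")
```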

user7610
  • It's probably easy to construct special cases that defeat this estimate, e.g. loops that run much slower on some CPU because of a microarchitectural quirk that leads to stalls when it would otherwise be going full out. (e.g. [32-byte aligned routine does not fit the uops cache](https://stackoverflow.com/a/61016915): the JCC erratum mitigation introduced new performance potholes on some Intel CPUs.) Still, an interesting idea that might be better than nothing for whole applications, especially ones that aren't "high performance computing" and are more often I/O bound. – Peter Cordes Jan 09 '21 at 08:50
  • 1
    That thesis mentions Java so this might actually be real-world usable for Java programs, IDK I only read the abstract. – Peter Cordes Jan 09 '21 at 08:55