
I have a C++ application, written by someone else, that is supposed to take maximal advantage of CPU caches. The application runs on a guest Ubuntu OS under paravirtualization. I ran cachegrind on it and got very low cache miss rates.

Since my OS is virtualized, can I trust that these values are accurate, i.e. that the CPU cache really is being used well by my application?

Peter Smith
  • Keyword here is "supposed to". Programmers are very good at deceiving themselves about what's happening, from a time perspective, in a program. The first thing I would suggest is to sample the thing and see what it's really doing. [Here's an example.](http://stackoverflow.com/questions/926266/performance-optimization-strategies-of-last-resort/927773#927773) – Mike Dunlavey Jan 08 '11 at 14:23
  • +1 Good question. I am working through this right now, too. Any updates since the original question? – Iterator Oct 13 '11 at 23:58
  • @Iterator I never really found a good way to test this and my organization eventually just moved the application to its own standalone machine rather than keeping it virtualized. – Peter Smith Oct 17 '11 at 18:02
  • Thanks for the update. Did the cache hit rates on the standalone machine match what you'd seen under Xen? – Iterator Oct 17 '11 at 18:38

1 Answer


Cachegrind is a simulator. A real CPU may actually perform differently (e.g. your real CPU may have a different cache hierarchy from the one Cachegrind simulates, different cache sizes, a different replacement policy, and so forth). To know for sure how well your program performs on real hardware with respect to the cache, you would need to watch the real CPU's performance counters.
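As a minimal sketch of what "watch the real CPU's performance counters" looks like on Linux, the following reads the hardware cache-miss counter around a region of code via `perf_event_open(2)`. This assumes a Linux guest where the hypervisor actually exposes the PMU; under a paravirtualized guest the call may simply fail or count nothing, which is itself a useful signal for this question. The memory-touching loop is just a stand-in workload, not your application.

```cpp
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <sys/types.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <vector>

// Thin wrapper: glibc provides no perf_event_open() symbol, so call the
// raw syscall directly.
static long perf_event_open(perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags) {
    return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main() {
    perf_event_attr attr{};                    // zero-initialize all fields
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_CACHE_MISSES;  // last-level cache misses
    attr.disabled = 1;                         // start stopped; enable below
    attr.exclude_kernel = 1;
    attr.exclude_hv = 1;

    int fd = static_cast<int>(perf_event_open(&attr, 0, -1, -1, 0));
    if (fd == -1) {
        // Fails if the hypervisor hides the PMU from the guest, or if
        // /proc/sys/kernel/perf_event_paranoid is too restrictive.
        perror("perf_event_open");
        return 1;
    }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    // --- region of interest: stand-in workload, touches ~4 MiB ---
    std::vector<int> v(1 << 20, 1);
    long sum = 0;
    for (int x : v) sum += x;
    // --------------------------------------------------------------

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    uint64_t misses = 0;
    read(fd, &misses, sizeof(misses));  // default read_format: single u64
    std::printf("cache misses: %llu (sum=%ld)\n",
                static_cast<unsigned long long>(misses), sum);
    close(fd);
    return 0;
}
```

For a whole-program view without instrumenting the source, `perf stat` over the binary reports the same hardware counters; if those numbers disagree badly with Cachegrind's simulation, trust the hardware.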

Anon