
Sometimes I need to profile an application while also firing off a large number of unrelated calculations. Often I launch enough jobs that the load exceeds the number of cores, so I can just come back some time later with all of them finished.

Can I still safely interpret the profile results in this situation?

I can imagine situations where, with the profiled job getting proportionally less CPU because the system is oversubscribed, the results are unaffected, because perhaps the profiler's sampling of the job slows down by the same proportion.

I can also imagine a profiler that samples the application at a fixed wall-clock period, say every 10 ms, concluding that the code spends longer in a particular function, but only because the system is oversubscribed.

I'm only speculating here; both effects may be real, but I need clarification.
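To make the two scenarios concrete, here is a minimal sketch (not tied to any particular profiler) contrasting the two clocks involved: `time.process_time` only advances while this process is on a CPU, while `time.perf_counter` measures wall-clock time. Under oversubscription the wall-clock figure inflates, but the CPU-time figure for the same work stays roughly constant:

```python
import time

def busy(n):
    # Burn CPU in a tight loop to give the clocks something to measure.
    total = 0
    for i in range(n):
        total += i * i
    return total

wall_start = time.perf_counter()
cpu_start = time.process_time()
busy(2_000_000)
wall = time.perf_counter() - wall_start
cpu = time.process_time() - cpu_start

# On an idle machine, wall and cpu come out close; on an oversubscribed
# machine, wall grows (the process waits for a core) while cpu stays
# roughly the same for the same amount of work.
print(f"wall = {wall:.3f}s, cpu = {cpu:.3f}s")
```

A profiler whose samples are driven by consumed CPU time behaves like the first scenario; one driven by a fixed wall-clock timer behaves like the second.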

EMiller

1 Answer


You're not just measuring, right? You're trying to find hidden "diseases" that, if you could cure them, would make the code run faster, right?

If so, any such disease takes a certain fraction of its process's time, no matter how fast or slowly the process runs for other reasons. It could run for one minute or one day; the fraction stays roughly the same.

So you don't need something that measures time. You need something that pinpoints the diseases that take the largest fraction of time.

Random pausing is a method that finds them.
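The fraction argument can be illustrated with a toy simulation. Here the hypothetical hot spot `disease` is assumed to occupy 40% of the program's time; random stack samples recover that fraction without ever measuring absolute time, which is why overall slowdown from oversubscription doesn't distort it:

```python
import random

random.seed(0)

# Simulated program: 40% of its time is spent inside the hypothetical
# hot spot `disease`, 60% elsewhere.  The absolute speed of the program
# never enters the model, only the proportion.
def sample_stack():
    return "disease" if random.random() < 0.40 else "other"

# Take many random "pauses" and count how often the hot spot is on the stack.
samples = [sample_stack() for _ in range(10_000)]
fraction = samples.count("disease") / len(samples)

# fraction comes out near 0.40 whether the real program takes one
# minute or one day to run.
print(f"estimated fraction in hot spot: {fraction:.2f}")
```

The estimate depends only on the proportion of samples landing in the hot spot, not on how long any individual sample, or the whole run, takes.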

Mike Dunlavey