Sometimes I need to profile an application while also firing off a large number of unrelated calculations. Often I will launch enough jobs that the load exceeds the number of cores, so that I can come back later and find everything finished.
Can I still safely interpret the profile results in this situation?
I can imagine that in some situations the results would be unaffected: the profiled job gets proportionally less CPU time because the system is oversubscribed, but perhaps the profiler's polling of that job is scaled back by the same proportion.
I can also imagine a profiler that samples the application at a fixed wall-clock interval, say every 10 ms, concluding that the code spends longer in a particular function only because the system is oversubscribed, not because the function itself does more work.
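To make the second scenario concrete, here is a minimal Python sketch of the distinction I have in mind (the busy_work function is hypothetical, just standing in for the profiled code). It contrasts wall-clock time with CPU time for the same function; on an oversubscribed machine the two diverge, which is the effect a fixed-interval wall-clock sampler might pick up:

    import time

    def busy_work(n=5_000_000):
        # Hypothetical CPU-bound workload standing in for the profiled function.
        total = 0
        for i in range(n):
            total += i * i
        return total

    wall_start = time.perf_counter()   # wall clock: keeps advancing while the OS has us descheduled
    cpu_start = time.process_time()    # CPU time: only advances while this process is on a core

    busy_work()

    wall_elapsed = time.perf_counter() - wall_start
    cpu_elapsed = time.process_time() - cpu_start

    print(f"wall-clock: {wall_elapsed:.3f} s")
    print(f"CPU time:   {cpu_elapsed:.3f} s")
    # On an idle machine the two numbers are close; on an oversubscribed machine
    # wall-clock grows while CPU time stays roughly the same, so a profiler that
    # samples on wall-clock intervals would report the function as "slower".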
I'm only speculating here; both of these may be true, but I would appreciate clarification.