Our tool generates performance logs in diagnostic mode; however, we track performance as wall-clock execution time (Stopwatch + milliseconds).
Obviously this isn't reliable at all: the testing system's CPU can be used by some random process, the results will be totally different if the tool is configured to run 10 threads rather than 2, and so on.
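To make the problem concrete, here is the essence of what we measure today, sketched in Python rather than our actual .NET code (`time.perf_counter()` plays the role of Stopwatch, and `busy_work` is a stand-in for the code under test):

```python
import time

def busy_work(n):
    # Stand-in for the code under test: burns CPU deterministically.
    total = 0
    for i in range(n):
        total += i * i
    return total

start = time.perf_counter()            # wall clock, like Stopwatch
busy_work(1_000_000)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"wall-clock: {elapsed_ms:.1f} ms")  # varies with system load
```

The printed number is wall-clock time, so it includes time during which the OS scheduled other processes or other threads of our own tool.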
My question is:
What's the correct way to measure the CPU time consumed by a piece of code (not by the whole process)?
What I mean by CPU Time:
Basically, how many cycles the CPU spent on the code. I assume this will always be the same for the same piece of code on the same computer and won't be affected by other processes. There might be some fundamental stuff I'm missing here; if so, please enlighten me in the comments or answers.
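To illustrate the distinction I'm after, here is a sketch (again in Python for brevity; `time.thread_time()` reports CPU time consumed by the current thread only, which I believe is the kind of counter I want, similar to what `GetThreadTimes` exposes on Windows):

```python
import time

def busy_work(n):
    # Stand-in for the code under test.
    total = 0
    for i in range(n):
        total += i * i
    return total

cpu_start = time.thread_time()    # CPU time of this thread only
wall_start = time.perf_counter()  # wall clock
busy_work(1_000_000)
time.sleep(0.1)                   # costs wall time but almost no CPU time
cpu_ms = (time.thread_time() - cpu_start) * 1000
wall_ms = (time.perf_counter() - wall_start) * 1000
print(f"thread CPU: {cpu_ms:.1f} ms, wall-clock: {wall_ms:.1f} ms")
```

The sleep shows the difference: wall-clock time grows while the thread is blocked, but the thread CPU counter barely moves, and (as I understand it) it is also not charged for time other processes spend on the CPU.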
P.S. Using a profiler is not possible in our setup.
Another update:
Why I'm not going to use a profiler
Because we need to test the code in different environments with different data, where we don't have a profiler, an IDE, or anything like that. Hence the code itself should handle the measurement. An extreme option might be to use a profiler's DLL, but I don't think this task calls for such a complex solution (assuming there is no free, easy-to-integrate profiling library out there).