This is part of the nondeterminism inherent in operating systems. You could run your program under seemingly identical conditions, and it could run faster or slower depending on:
- How many other runnable processes there are
- How those runnable processes are behaving (are they I/O-bound or CPU-bound?)
Your process's priority can even change during execution. For example, if your process keeps using up its entire time quantum, it will probably be moved to a lower-priority level of the scheduling queue, meaning your program will be given the CPU less often (the operating system is predicting that it will stay CPU-hungry).
There is no way to guarantee your process will be running on the CPU at a certain time.
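If you want a rough sense of this, here is a minimal sketch (POSIX only, so not for the Windows build; getrusage() is the only non-standard-C call it relies on) that asks the kernel, after some busy work, how many times the scheduler took the process off the CPU:

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    volatile unsigned long sum = 0;
    for (unsigned long i = 0; i < 100000000UL; i++)
        sum += i;                         /* CPU-bound busy work */

    struct rusage usage;
    if (getrusage(RUSAGE_SELF, &usage) == 0) {
        /* how often we gave up the CPU vs. how often the scheduler took it away */
        printf("voluntary context switches:   %ld\n", usage.ru_nvcsw);
        printf("involuntary context switches: %ld\n", usage.ru_nivcsw);
    }
    return 0;
}

A nonzero involuntary count means the scheduler preempted your process at moments you did not choose, and the count will usually differ from run to run.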
My answer to your second question: instead of looking at the absolute start and end times of the method, why not just calculate the difference between them?
#include <time.h>   /* for clock() and CLOCKS_PER_SEC */

clock_t start = clock();
methodCall();
clock_t ticks = clock() - start;                 /* CPU time used, in clock ticks */
double seconds = (double)ticks / CLOCKS_PER_SEC; /* convert ticks to seconds */
As I said before, this will not give you exactly the same result every time you run it, but it should give you a reasonable approximation of how long your method takes to complete.
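One caveat: clock() measures CPU time, not wall-clock time. If elapsed wall-clock time is what you care about, a minimal sketch for POSIX systems (methodCall() below is just a stand-in for whatever you are measuring) is to use clock_gettime() with a monotonic clock and average over a few runs:

#include <stdio.h>
#include <time.h>

static void methodCall(void) {            /* stand-in for the code being timed */
    volatile unsigned long sum = 0;
    for (unsigned long i = 0; i < 10000000UL; i++)
        sum += i;
}

/* Wall-clock seconds taken by one call, using a clock that only moves forward. */
static double time_once(void) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    methodCall();
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (double)(t1.tv_sec - t0.tv_sec) + (double)(t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    const int runs = 10;
    double total = 0.0;
    for (int i = 0; i < runs; i++)
        total += time_once();             /* each run will differ slightly */
    printf("average over %d runs: %f seconds\n", runs, total / runs);
    return 0;
}

Averaging doesn't make the result deterministic; it just reduces how much any single unlucky run skews the number.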
One last note: when building code for a platform, whether it be Windows, Mac OS X, or Linux, you should not worry about how long your time quantum is. Your process can't detect when it has been taken off of the CPU (or at least not until it gets put back on).
The process is an abstraction provided by the operating system which allows us to not worry about the intricate details of how our process is being managed.