In a post on Peter Lawrey's Vanilla Java blog, he demonstrates the jitter seen by busy-waiting threads on isolated vs. non-isolated CPU cores, using an experiment that measures the gap between consecutive calls to System.nanoTime().
I'm trying to better understand why there's such a large spike in the 100μs range, happening almost 30 times per second. He says it has to do with the minimum time unit being 100μs and the scheduler sleeping and then waking the thread at the next time unit. However, I don't quite understand how this causes the 100μs delay. Does this mean that the minimum time quantum for CFS is likely set to 100μs, and that the experiment is pre-empted while another process/thread runs for those 100μs, at which point the timer experiment is context-switched back in?
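For reference, here is a minimal sketch of the kind of measurement I mean (my own reconstruction in Java, not Lawrey's actual benchmark; the 10μs reporting threshold and the run time are arbitrary choices of mine):

```java
// Busy-spin on System.nanoTime() and report any gap between consecutive
// samples above a threshold. Gaps well above the call overhead indicate
// the thread was interrupted or descheduled.
public class NanoTimeJitter {
    public static void main(String[] args) {
        final long thresholdNs = 10_000;        // report gaps over 10 us (arbitrary)
        final long runNs = 10_000_000_000L;     // run for ~10 seconds
        long start = System.nanoTime();
        long prev = start;
        long spikes = 0;
        while (true) {
            long now = System.nanoTime();
            long gap = now - prev;
            if (gap > thresholdNs) {
                spikes++;
                System.out.printf("gap of %,d ns after %,d ms%n",
                        gap, (now - start) / 1_000_000);
            }
            prev = now;
            if (now - start > runNs) break;
        }
        System.out.println("spikes over threshold: " + spikes);
    }
}
```

Running something like this on a non-isolated core is where I'd expect to see the ~100μs gaps clustering, if I've understood the setup correctly.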
A time quantum of 100μs seems extremely short, however, given that the Linux default is, to my understanding, about 40 times that. On the other hand, 100μs also seems much too long for the scheduler's kernel code to run if no context switch actually happens (though I could be mistaken). So I'm curious why these 100μs spikes happen so often and are consistently reproducible in his tests.
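To check my assumption about the default quantum, I was thinking of reading the CFS tunables on the machine running the test, along these lines (a sketch only; the /proc/sys/kernel paths are an assumption and, as far as I know, only exist on older kernels, with newer ones exposing them under /sys/kernel/debug/sched):

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Print the CFS scheduler tunables, if this kernel still exposes them
// under /proc/sys/kernel (values are in nanoseconds).
public class SchedTunables {
    public static void main(String[] args) throws Exception {
        String[] files = {
            "/proc/sys/kernel/sched_min_granularity_ns",
            "/proc/sys/kernel/sched_latency_ns",
            "/proc/sys/kernel/sched_wakeup_granularity_ns"
        };
        for (String f : files) {
            Path p = Path.of(f);
            if (Files.exists(p)) {
                System.out.println(f + " = " + Files.readString(p).trim());
            } else {
                System.out.println(f + " not present on this kernel");
            }
        }
    }
}
```

If those values come back in the low milliseconds, as I expect, then the 100μs spikes presumably aren't explained by the CFS quantum itself, which is really the heart of my question.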