I've been reading a paper on real-time systems using the Linux OS, and the term "scheduling jitter" is used repeatedly without definition.
What is scheduling jitter? What does it mean?
Jitter is the variation between subsequent periods of time for a given task. In a real-time OS it is important to keep jitter within a level that is acceptable for the application.
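A rough sketch of how that can be measured on Linux is below. The 10 ms period and iteration count are arbitrary, and plain nanosleep() is used, so the number reported mixes timer resolution and scheduling effects:

    /* Run a nominally periodic 10 ms loop and report the largest difference
     * seen between two consecutive observed periods. */
    #include <stdio.h>
    #include <time.h>

    #define PERIOD_NS  10000000L   /* nominal period: 10 ms (arbitrary) */
    #define ITERATIONS 200

    static long long ts_to_ns(const struct timespec *ts)
    {
        return (long long)ts->tv_sec * 1000000000LL + ts->tv_nsec;
    }

    int main(void)
    {
        struct timespec sleep_for = { 0, PERIOD_NS };
        struct timespec prev, now;
        long long prev_period = -1, max_jitter = 0;

        clock_gettime(CLOCK_MONOTONIC, &prev);
        for (int i = 0; i < ITERATIONS; i++) {
            nanosleep(&sleep_for, NULL);
            clock_gettime(CLOCK_MONOTONIC, &now);

            long long period = ts_to_ns(&now) - ts_to_ns(&prev);  /* observed period */
            if (prev_period >= 0) {
                /* Jitter as described above: difference between subsequent periods. */
                long long jitter = period - prev_period;
                if (jitter < 0) jitter = -jitter;
                if (jitter > max_jitter) max_jitter = jitter;
            }
            prev_period = period;
            prev = now;
        }
        printf("max period-to-period jitter: %lld ns\n", max_jitter);
        return 0;
    }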
Jitter is the irregularity of a time-based signal. For example, in networking, jitter is the variability of packet latency across the network. In scheduling, I'm assuming jitter refers to inequality of the slices of time allocated to processes.
Read more here http://en.wikipedia.org/wiki/Jitter
Scheduling jitter is the maximum expected variation in a program's execution period.
This concept is very important in real-time simulation systems. My experience comes from over 30 years in the real-time simulation industry (mostly Flight Simulation). Ideally there would be no jitter at all, and that is precisely the objective of hard real-time scheduling.
Suppose, for example, that a real-time simulation needs to execute a certain program at 400 Hz in order to produce a stable and accurate simulation of that subsystem. That means we expect the system to execute the program once every 2.5 msec. To achieve that in a hard real-time system, high-resolution clocks are used to schedule that module at a high priority so that the jitter is nearly zero. In a soft real-time simulation, a larger amount of jitter would be expected. If the scheduling jitter were 0.1 msec, then the starting point for that program would be every 2.5 msec +/- 0.1 msec (or less). That would be acceptable as long as executing the program never takes longer than 2.3 msec (the 2.5 msec period minus up to 0.1 msec of jitter at the start of this frame and another 0.1 msec at the start of the next). Otherwise the program could "overrun". If that ever happens, determinism is lost and the simulation loses fidelity.
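For illustration only, here is a sketch of how such a 400 Hz frame loop might be driven on Linux with an absolute monotonic timer. The SCHED_FIFO priority of 80, the frame count, and the empty run_frame() are placeholders, not anything taken from the paper:

    /* Sketch of a 400 Hz periodic frame loop on Linux (2.5 ms period), using an
     * absolute monotonic timer so wake-up times stay anchored to the schedule. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <time.h>
    #include <sched.h>

    #define NSEC_PER_SEC 1000000000LL
    #define PERIOD_NS    2500000L          /* 400 Hz -> 2.5 ms per frame */

    static void timespec_add_ns(struct timespec *t, long ns)
    {
        t->tv_nsec += ns;
        if (t->tv_nsec >= NSEC_PER_SEC) {
            t->tv_nsec -= NSEC_PER_SEC;
            t->tv_sec++;
        }
    }

    static long long diff_ns(const struct timespec *a, const struct timespec *b)
    {
        return (a->tv_sec - b->tv_sec) * NSEC_PER_SEC + (a->tv_nsec - b->tv_nsec);
    }

    static void run_frame(void) { /* the simulated subsystem's work goes here */ }

    int main(void)
    {
        /* Ask for a real-time priority so ordinary load cannot delay our wake-ups
         * (needs root or CAP_SYS_NICE; otherwise we keep the default policy). */
        struct sched_param sp = { .sched_priority = 80 };
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            perror("sched_setscheduler");

        struct timespec next, now;
        clock_gettime(CLOCK_MONOTONIC, &next);

        for (int frame = 0; frame < 4000; frame++) {     /* ~10 s worth of frames */
            timespec_add_ns(&next, PERIOD_NS);
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);

            clock_gettime(CLOCK_MONOTONIC, &now);
            long long start_jitter = diff_ns(&now, &next);  /* how late we woke up */

            run_frame();

            /* If we are still running when the next frame is due, we overran. */
            clock_gettime(CLOCK_MONOTONIC, &now);
            long long overrun = diff_ns(&now, &next) - PERIOD_NS;
            if (overrun > 0)
                printf("frame %d overran by %lld ns (start jitter %lld ns)\n",
                       frame, overrun, start_jitter);
        }
        return 0;
    }

Using TIMER_ABSTIME ties each wake-up to a fixed schedule rather than to "now + 2.5 msec", so a late wake-up shows up as jitter for that one frame but does not accumulate into drift.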
So, given djc's answer, scheduling jitter in the context of my question above would be:
Scheduling jitter: inequality of the slices of time allocated to processes by the system scheduler, occurring out of necessity. An example of where this might occur: if there is a requirement that every process in a real-time environment use no more than 100 ms of processor time per scheduled slice, a process that requires and uses 150 ms of time would cause significant scheduling jitter in that real-time system.
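A small sketch of that budget check, assuming a made-up busy_work() in place of the real process and using CLOCK_THREAD_CPUTIME_ID to measure the CPU time actually consumed:

    /* Measure how much CPU time a unit of work consumes and flag it when it
     * exceeds a 100 ms budget; an over-budget slice delays later tasks and
     * shows up as scheduling jitter. */
    #include <stdio.h>
    #include <time.h>

    #define BUDGET_NS 100000000LL   /* 100 ms per scheduled slice */

    static void busy_work(void)
    {
        /* Stand-in for the real process's work; just burns some CPU time. */
        volatile unsigned long x = 0;
        for (unsigned long i = 0; i < 200000000UL; i++)
            x += i;
    }

    int main(void)
    {
        struct timespec start, end;

        clock_gettime(CLOCK_THREAD_CPUTIME_ID, &start);
        busy_work();
        clock_gettime(CLOCK_THREAD_CPUTIME_ID, &end);

        long long used_ns = (end.tv_sec - start.tv_sec) * 1000000000LL
                          + (end.tv_nsec - start.tv_nsec);
        if (used_ns > BUDGET_NS)
            printf("budget exceeded: used %lld ms against a 100 ms budget\n",
                   used_ns / 1000000LL);
        else
            printf("within budget: used %lld ms\n", used_ns / 1000000LL);
        return 0;
    }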
Scheduling jitter in real-time operating systems is not about the differing time slices of processes. Jitter is a variable deviation from an ideal timing event. Scheduling jitter is the delay between the time when a task should start and the time when it actually starts. For example, consider a task that should start after 10 ms but, for whatever reason, starts after 15 ms; in this example the jitter is 5 ms!
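A tiny sketch of that measurement on Linux: ask to be woken 10 ms from now, then report how much later the wake-up actually happened; that lateness is the scheduling jitter:

    /* Measure one-shot scheduling jitter: nominal start time vs. actual start time. */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec target, actual;
        clock_gettime(CLOCK_MONOTONIC, &target);

        /* Nominal start time: now + 10 ms. */
        target.tv_nsec += 10 * 1000000L;
        if (target.tv_nsec >= 1000000000L) {
            target.tv_nsec -= 1000000000L;
            target.tv_sec++;
        }

        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &target, NULL);
        clock_gettime(CLOCK_MONOTONIC, &actual);

        long long jitter_us = ((actual.tv_sec - target.tv_sec) * 1000000000LL
                             + (actual.tv_nsec - target.tv_nsec)) / 1000;
        /* In the 15 ms vs. 10 ms case above this would print roughly 5000 us;
         * on an idle system the value is usually far smaller. */
        printf("scheduling jitter: %lld us\n", jitter_us);
        return 0;
    }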
Put simply, in operating-system terms jitter means delay: scheduling jitter is the difference between a task's actual starting time and its nominal starting time.
Scheduling jitter can also be described as the interval from the point of occurrence of the system tick (SysTick) to the point of execution of the first instruction of the woken-up periodic task.