
There's something I fundamentally don't understand about how multitasking works in Linux (and probably in general). If I understand correctly, each time a process wants to change its output on the screen, it needs to do some computation and send the data. But processes can apparently hog the CPU for up to 100 ms before being preempted under the default settings of most Linux distributions. That would seem to preclude processes being unblocked frequently enough to refresh the screen at 60 Hz. So I guess there's probably a whole host of fundamental misunderstandings I have about how Linux manages its scarce CPU time and/or about how processes send data to I/O devices.

Question. What's going on here?

goblin GONE
  • Why the close vote? This is a perfectly legitimate question. This site is just so weird... – goblin GONE Feb 26 '20 at 02:24
  • Something *being able to* do something doesn’t mean it *does happen*. Sure, if each core has a thread that hogs it you can’t have 60Hz updates on screen, unless one of those threads is doing it. – Sami Kuhmonen Feb 26 '20 at 02:37

2 Answers


You seem to be confusing different scheduling policies.

In Linux, there are several scheduling policies, each with its own time-slice behaviour. The 100 ms default time slice applies only to the SCHED_RR policy, which is meant for real-time processes. In practice, no normal process runs under SCHED_RR.
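
You can check this yourself. Here's a minimal sketch (Linux-only, Python 3.3+) that asks the kernel which policy the current process runs under:

```python
import os

# Minimal sketch (Linux, Python 3.3+): ask the kernel which scheduling
# policy the current process runs under.
policy = os.sched_getscheduler(0)  # pid 0 means "the calling process"
names = {os.SCHED_OTHER: "SCHED_OTHER",
         os.SCHED_FIFO: "SCHED_FIFO",
         os.SCHED_RR: "SCHED_RR"}
print(names.get(policy, policy))   # an ordinary process prints SCHED_OTHER
```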

Normal processes run under SCHED_OTHER, the default scheduling policy. Under this policy, the time slice is determined dynamically at runtime and is much shorter: by default, anywhere between 0.75 ms and 6 ms. You can see these defaults (in nanoseconds) defined in kernel/sched/fair.c as sysctl_sched_min_granularity and sysctl_sched_latency respectively, and you can read the actual values on your system from /proc/sys/kernel/sched_min_granularity_ns and /proc/sys/kernel/sched_latency_ns.
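
For example, a short sketch that reads both files and converts them to milliseconds (assuming your kernel still exposes these tunables under /proc; on recent kernels they moved under debugfs):

```python
# Sketch: read the CFS tunables mentioned above (values are in nanoseconds).
for name in ("sched_min_granularity_ns", "sched_latency_ns"):
    try:
        with open(f"/proc/sys/kernel/{name}") as f:
            ns = int(f.read())
        print(f"{name}: {ns} ns ({ns / 1e6:.2f} ms)")
    except FileNotFoundError:
        print(f"{name}: not exposed on this kernel")
```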

You can learn more about the Linux kernel's CFS scheduler in the kernel documentation (see Documentation/scheduler/ in the kernel source tree).

Marco Bonelli

In theory, it's possibly far worse than what you think: if there are 100 other processes and each process consumes the maximum time slice it's allowed (100 ms), then it could take 100 * 100 ms = 10 seconds before the game's process gets CPU time again.

However:

a) the maximum time slice length is configurable (when compiling the kernel) and (for desktop systems) is more likely to be 10 ms

b) processes that consume the maximum time slice they're allowed are extremely rare. If a process is given a maximum of 10 ms but blocks after 1 ms (because it has to wait for disk I/O, or the network, or a mutex, or ...), then it only uses 1 ms

c) for modern computers it's extremely likely that there are multiple CPUs

d) there are other scheduling policies (see http://man7.org/linux/man-pages/man7/sched.7.html ), plus task priorities ("nice") and "cgroups". All of these can be used to ensure a special process (e.g. a game) gets CPU time before other processes and/or gets more CPU time than other processes (see the sketch after this list)

e) most people playing games simply don't have other processes consuming CPU time at the same time; they might have multiple processes that aren't consuming any CPU time, but won't have multiple processes trying to consume 100% of all CPUs' time.
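
To make point (d) concrete, here's a hedged Python sketch of nudging the scheduler in a process's favour; the nice increment is arbitrary, and raising priority normally requires root (or CAP_SYS_NICE):

```python
import os

# Sketch for point (d): ask the scheduler to favour this process.
# The value -5 is arbitrary; negative increments need root/CAP_SYS_NICE.
try:
    print("new nice value:", os.nice(-5))
except PermissionError:
    print("not privileged enough to raise priority")

# A privileged process could even switch itself to a real-time policy:
# os.sched_setscheduler(0, os.SCHED_RR, os.sched_param(10))
```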

Brendan
  • Interesting. So why is it that when I bring up a Python console and type `math.factorial(10000)`, all my GUI programs like Firefox keep on refreshing at the full 60Hz while the console struggles to compute the value? Is this because the Python console is only getting 10ms at a time? – goblin GONE Feb 26 '20 at 04:51
  • @goblin: How many CPUs is Python using (all of them, or one)? Note that GUI programs typically update the screen whenever something changes, without caring about timing, and don't have a "60 Hz refresh" at all. There are also a few special cases (e.g. video being decoded/displayed by the video card's hardware, where the CPU isn't used). – Brendan Feb 26 '20 at 05:27
  • *"In theory; it's possibly far worse than what you think"* - it's definitely not. 100ms is the default time slice for `SCHED_RR`, on a normal system basically no process runs under `SCHED_RR`. The default policy is `SCHED_OTHER` and the timeslice for that policy is calculated at runtime and much shorter (< 10ms). – Marco Bonelli Feb 26 '20 at 13:04
  • @MarcoBonelli: In theory, waiting for many tasks to each have some CPU time can be far worse than the OP's original "waiting for only one other task to have some CPU time" assumption. The maximum length of a time slice is mostly irrelevant. – Brendan Feb 27 '20 at 05:35