I've read this question, Measure time in Linux - getrusage vs clock_gettime vs clock vs gettimeofday?, which provides a great breakdown of the timing functions available in C.
I'm still confused, however, about how the different notions of "time" are maintained by the OS and hardware.
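For context, here's a minimal sketch of the four calls that question compares (POSIX/Linux assumed; error handling omitted):

```c
#include <stdio.h>
#include <time.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);              /* wall clock, microsecond resolution */

    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);   /* wall clock, nanosecond resolution */

    clock_t c = clock();                  /* CPU time used by this process */

    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);          /* CPU time split into user/system */

    printf("gettimeofday:  %ld.%06ld s\n", (long)tv.tv_sec, (long)tv.tv_usec);
    printf("clock_gettime: %ld.%09ld s\n", (long)ts.tv_sec, ts.tv_nsec);
    printf("clock:         %.6f s CPU\n", (double)c / CLOCKS_PER_SEC);
    printf("getrusage:     %ld.%06ld s user CPU\n",
           (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
    return 0;
}
```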
This is a quote from the Linux rtc(4) man page:
> RTCs should not be confused with the system clock, which is a software clock maintained by the kernel and used to implement gettimeofday(2) and time(2), as well as setting timestamps on files, and so on. The system clock reports seconds and microseconds since a start point, defined to be the POSIX Epoch: 1970-01-01 00:00:00 +0000 (UTC). (One common implementation counts timer interrupts, once per "jiffy", at a frequency of 100, 250, or 1000 Hz.) That is, it is supposed to report wall clock time, which RTCs also do.
>
> A key difference between an RTC and the system clock is that RTCs run even when the system is in a low power state (including "off"), and the system clock can't. Until it is initialized, the system clock can only report time since system boot ... not since the POSIX Epoch. So at boot time, and after resuming from a system low power state, the system clock will often be set to the current wall clock time using an RTC. Systems without an RTC need to set the system clock using another clock, maybe across the network or by entering that data manually.
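To make the distinction concrete, here's a sketch of reading both clocks side by side. I'm assuming a Linux machine that exposes the RTC at /dev/rtc (it may be /dev/rtc0 on some systems, and reading it may require root):

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>
#include <sys/ioctl.h>
#include <linux/rtc.h>

int main(void)
{
    int fd = open("/dev/rtc", O_RDONLY);  /* hardware RTC device node */
    if (fd < 0) { perror("open /dev/rtc"); return 1; }

    struct rtc_time rt;
    if (ioctl(fd, RTC_RD_TIME, &rt) < 0) { perror("RTC_RD_TIME"); return 1; }
    close(fd);

    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);   /* the kernel's software clock */

    printf("RTC (hardware): %04d-%02d-%02d %02d:%02d:%02d\n",
           rt.tm_year + 1900, rt.tm_mon + 1, rt.tm_mday,
           rt.tm_hour, rt.tm_min, rt.tm_sec);
    printf("System clock:   %ld s since the POSIX Epoch\n", (long)ts.tv_sec);
    return 0;
}
```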
The Arch Linux docs indicate that the RTC and system clock are independent after bootup. My questions then are:
- What causes the interrupts that increment the system clock?
- If wall time is just an interval measured with the system clock, what does process time depend on? (The sketch after this list shows the distinction I mean.)
- Is any of this related to the CPU frequency, or is that a totally orthogonal time-keeping mechanism?
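For the second question, this is the kind of divergence I mean: sleeping advances the wall clock but, as I understand it, not the process CPU-time clock. A minimal sketch using POSIX clock_gettime:

```c
#include <stdio.h>
#include <unistd.h>
#include <time.h>

/* Difference between two timespecs, in seconds. */
static double elapsed(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    struct timespec wall0, wall1, cpu0, cpu1;

    clock_gettime(CLOCK_REALTIME, &wall0);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &cpu0);

    sleep(1);                              /* blocked: no CPU time consumed */
    for (volatile long i = 0; i < 100000000L; i++)
        ;                                  /* busy loop: CPU time consumed */

    clock_gettime(CLOCK_REALTIME, &wall1);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &cpu1);

    printf("wall time:        %.3f s\n", elapsed(wall0, wall1));
    printf("process CPU time: %.3f s\n", elapsed(cpu0, cpu1));
    return 0;
}
```

If my understanding is right, the wall-time figure should come out roughly one second larger than the CPU-time figure, since the sleep consumes no CPU. What I don't get is what hardware source each of those clocks ultimately ticks off of.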