
Is it possible in Linux to change the rate at which the kernel ticks, for finer time resolution?

The default tick rate is set to 1000 Hz on my system, meaning I have a minimum latency of 1 ms. I need a much more precise system, with latency around 1 µs or even 1 ns, but I don't know whether it is even possible to raise this beyond 1000 Hz.

If it is possible to change, where and how can this be accomplished?

Is there any other possible workaround, considering I'm writing a C program?

m.s.
user3828657
  • If you are just looking for higher timer resolution, such as nanoseconds, you can use [`clock_gettime()`](http://man7.org/linux/man-pages/man2/clock_gettime.2.html) instead of messing with jiffies. For more detail, see this SO [answer](http://stackoverflow.com/a/6749766/1865106). – SSC Oct 16 '15 at 08:58
  • Do you need a finer resolution of the scheduler for your system to be more reactive or do you need more resolution for time measurements? – Jens Gustedt Oct 16 '15 at 09:35
  • It's for better reaction time, to be able to sleep for x µs and/or execute every x µs. – user3828657 Oct 16 '15 at 10:32
  • You'll probably just end up with an overloaded system. In that case you won't get the desired effect. Why not use the high-resolution clock given by the `clock_gettime(2)` syscall? It uses the CPU tick counter, which gives you more resolution than the `0.001s` given by the clock tick. – Luis Colorado Oct 19 '15 at 05:32
  • Same here, but due to `tc-htb(8)` using jiffies… – mirabilos Oct 24 '21 at 20:05

1 Answer


Of course it depends on the hardware (particularly on specific embedded systems) and on the kernel version; however, in modern Linux, sleep time is not controlled by jiffies. Linux kernels above 2.6.24 can use the high precision event timer (HPET) to trigger events.

This is unrelated to application sleep: you do not need to set CONFIG_HZ_1000 to sleep for 1 ms or less.

Linux provides the functions usleep() and nanosleep(). The clock resolution is already 1 ns on a desktop PC running Ubuntu. However, that does not mean it is possible to reliably sleep for just a few nanoseconds on a non-real-time system.

Check this example (compile with `gcc clock.c -O3 -Wall -lrt -o clock`):

#include <stdio.h>
#include <unistd.h>
#include <time.h>

int main(void)
{
    struct timespec res, tp1, tp2;

    /* Resolution of the monotonic clock. */
    clock_getres(CLOCK_MONOTONIC, &res);

    /* Measure how long a requested 100 us sleep actually takes. */
    clock_gettime(CLOCK_MONOTONIC, &tp1);
    usleep(100);
    clock_gettime(CLOCK_MONOTONIC, &tp2);

    printf("resolution: %ld sec %ld ns\n", res.tv_sec, res.tv_nsec);
    printf("time1: %ld / %ld\n", tp1.tv_sec, tp1.tv_nsec);
    printf("time2: %ld / %ld\n", tp2.tv_sec, tp2.tv_nsec);
    printf("diff: %ld ns\n",
           (tp2.tv_sec - tp1.tv_sec) * 1000000000L + tp2.tv_nsec - tp1.tv_nsec);

    return 0;
}

By default, I see roughly 60 µs of extra delay on top of the requested sleep time. That accuracy can be acceptable when sleeping for hundreds of microseconds (CONFIG_HZ=250 on my system).

To reduce that delay, the process should be run with a higher priority, for example:

sudo chrt --rr 99 ./clock

In that case the error drops below 20 µs.

Without any sleep function between subsequent calls to clock_gettime() I see delays in the range of 200–900 ns. So it is possible to wait with microsecond precision using a busy loop:

clock_gettime(CLOCK_MONOTONIC, &tp1);
clock_gettime(CLOCK_MONOTONIC, &tp2);
while (get_diff(&tp1, &tp2) < ns_time_to_sleep)
    clock_gettime(CLOCK_MONOTONIC, &tp2);
Orest Hera