I've noticed that, generally, if I run:
while (true) {
    start = now()
    pseudoFunction()
    end = now()
}
the measured duration (end - start) around pseudoFunction() will be N times shorter than with:
while (true) {
    start = now()
    pseudoFunction()
    end = now()
    sleep(100ms)
}
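To make this concrete, here is roughly what I'm measuring, written out in C++. pseudoFunction here is just a placeholder busy loop standing in for my real, short latency-sensitive routine:

    #include <chrono>
    #include <cstdio>
    #include <thread>

    // Placeholder for the real workload; the actual function is a short
    // latency-sensitive routine.
    static void pseudoFunction() {
        volatile long x = 0;
        for (int i = 0; i < 100000; ++i) x += i;
    }

    int main() {
        using clock = std::chrono::steady_clock;
        for (int i = 0; i < 20; ++i) {
            auto start = clock::now();
            pseudoFunction();
            auto end = clock::now();
            auto us = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
            std::printf("iteration %d: %lld us\n", i, static_cast<long long>(us));
            // Comment out the next line to get the "fast" variant; with the
            // sleep in place, each measured call comes out several times slower.
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
    }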
My assumption is that C-states, CPU frequency boost, kernel scheduling, and the frequency governor cause this. So, without moving to an RT kernel, is there any way to tell the CPU or the kernel that a latency-sensitive routine is coming up, i.e. "please boost the clocks" or "give my thread attention"?
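The closest thing I'm aware of is the PM QoS interface: holding /dev/cpu_dma_latency open with a low value is supposed to keep the CPU out of deep C-states for as long as the file descriptor stays open. A minimal sketch of what I mean, assuming Linux and sufficient permissions; I don't know whether this addresses the clock-boost side at all, hence the question:

    #include <cstdint>
    #include <cstdio>
    #include <fcntl.h>
    #include <unistd.h>

    // Ask the kernel to keep CPU wakeup latency at or below max_latency_us
    // (microseconds) for as long as the returned fd stays open.
    // Returns -1 on failure (no permission, device not present, ...).
    int request_low_cstate_latency(int32_t max_latency_us) {
        int fd = open("/dev/cpu_dma_latency", O_WRONLY);
        if (fd < 0) return -1;
        if (write(fd, &max_latency_us, sizeof(max_latency_us)) != sizeof(max_latency_us)) {
            close(fd);
            return -1;
        }
        return fd;  // keep open for the duration of the latency-sensitive work
    }

    int main() {
        int fd = request_low_cstate_latency(0);  // 0 = stay in the shallowest state
        if (fd < 0) std::perror("cpu_dma_latency");
        // ... latency-sensitive loop would run here ...
        if (fd >= 0) close(fd);  // closing the fd drops the request
    }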
I know that C-state and governor settings can be changed easily, but are there any other ways to improve the performance of a latency- or speed-sensitive function? For example, does Intel have any API to boost the clock on demand? Are there any syscalls that could help with prioritization? Would setting thread priority actually increase speed?
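For instance, is something along these lines (pinning to one core plus the SCHED_FIFO real-time class) the kind of syscall-level prioritization that would actually help, or does it only affect scheduling and not clocks? A rough sketch of what I have in mind, assuming Linux and CAP_SYS_NICE (the core number and priority are arbitrary):

    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE
    #endif
    #include <cstdio>
    #include <sched.h>

    int main() {
        // Pin the calling thread to core 2 so it is not migrated mid-measurement.
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(2, &set);
        if (sched_setaffinity(0, sizeof(set), &set) != 0)
            std::perror("sched_setaffinity");

        // Switch to the SCHED_FIFO real-time class so ordinary threads
        // cannot preempt this one (needs CAP_SYS_NICE or an rtprio rlimit).
        sched_param sp{};
        sp.sched_priority = 50;
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            std::perror("sched_setscheduler");

        // ... latency-sensitive loop would run here ...
        return 0;
    }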
Thanks