
So I was trying to experimentally determine the (average) time slice on my Linux 4.4.0-98-generic system, given the hint from here. The following is my code, using the C++14 STL.

#include <chrono>
#include <iostream>
#include <unistd.h>

int main(int argc, char** argv) {
  std::chrono::time_point<std::chrono::high_resolution_clock> clock_t1 = std::chrono::high_resolution_clock::now();
  std::chrono::time_point<std::chrono::high_resolution_clock> clock_t2 = std::chrono::high_resolution_clock::now();
  std::chrono::duration<long long, std::nano> diff = clock_t2 - clock_t1;

  // Two consecutive clock reads count as "interrupted" if they are further apart than this.
  std::chrono::duration<long long, std::nano> diff_precision = std::chrono::nanoseconds(1LL);

  // Fork a few children so that several processes compete for the CPU.
  for(int i=0; i<5; i++) {
    if(0 == fork()) {
      break;
    }
  }

  int num_tries = 0;
  while(num_tries < 10) {
    // Spin until two consecutive clock reads are more than diff_precision apart.
    while((diff = clock_t2 - clock_t1) <= diff_precision) {
      clock_t1 = clock_t2;
      clock_t2 = std::chrono::high_resolution_clock::now();
    }
    // The gap is (hopefully) the time this process spent off the CPU.
    std::cout << diff.count() << std::endl;
    clock_t1 = std::chrono::high_resolution_clock::now();
    clock_t2 = std::chrono::high_resolution_clock::now();
    num_tries++;
  }

  return 0;
}

The problem is that I always get the same numbers for all processes, e.g. this output:

480
480
480
480
480
480

Also, it seemed strange that the time slice would be about half a microsecond, so I tried different values for diff_precision, from 1 ns right up to 100 ms. For each precision, the output scaled with it; e.g. for a 1 ms precision it would print a bunch of values like 5531380. Strange indeed.
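To check how much of that is just the cost of reading the clock itself, a small side experiment like the following (separate from the program above; the iteration count is an arbitrary choice) could average the gap between two back-to-back now() calls:

#include <chrono>
#include <iostream>

int main() {
  using hr_clock = std::chrono::high_resolution_clock;
  const int kIters = 1000000;  // arbitrary sample count

  long long total_ns = 0;
  for (int i = 0; i < kIters; i++) {
    hr_clock::time_point t1 = hr_clock::now();
    hr_clock::time_point t2 = hr_clock::now();
    total_ns += std::chrono::duration_cast<std::chrono::nanoseconds>(t2 - t1).count();
  }

  // Any diff_precision below this average only measures the clock read
  // itself, not a preemption by the scheduler.
  std::cout << "average back-to-back now() gap: "
            << (total_ns / kIters) << " ns" << std::endl;
  return 0;
}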

Is this even a reliable method to find out the OS time slice, or should I rather just trust the values from the source code?

P.S: “A question with that title already exists; please be more specific.” Oh c'mon SO.

garyF
  • If you're speaking about _time slices_, the _CFS_ scheduler doesn't operate with them, so the time a task spends on the CPU depends on the competing workload. – myaut Nov 07 '17 at 16:46
  • @myaut Yes, I mean the time slice. But if the default kernel has a `sysctl_sched_min_granularity`, shouldn't I get that value here? – garyF Nov 07 '17 at 16:51
  • Well, it is the _minimal_ value. I suggest you read chapter 4 of _Linux Kernel Development (3rd ed.)_. It explains CFS very well. – myaut Nov 07 '17 at 17:09
  • @myaut Thanks for the book, but it deepened my confusion even further. The minimal value in LKD is 1 ms (in `fair.c` it is 0.75 ms). Anyway, your suggestion helped in another way: it looks like the values around `5531380ns` are what I should expect. The operators `-` or `<=` might be taking more than a couple of microseconds to execute in some experiments. So I ran the code on another, faster machine, and there I managed to narrow `diff_precision` down to 1 microsecond; from there up to 1 ms, it gives the values I expect. P.S. (anyone): can I expect these operations to take variable time? – garyF Nov 07 '17 at 17:50
  • So I think my question is now resolved. The issue was that I needed to choose a granularity for the difference that ignores the execution time of reading the system clock itself, or possibly run the code on a faster machine to allow for a smaller granularity (a sketch of the adjusted loop is below). Thanks for your help @myaut. – garyF Nov 08 '17 at 18:42
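For the record, here is a sketch of what the adjusted loop from the last comment could look like. The 100 µs threshold is only a placeholder; any value comfortably above the clock-read overhead (and below the expected slice) should do:

#include <chrono>
#include <iostream>
#include <unistd.h>

int main() {
  using hr_clock = std::chrono::high_resolution_clock;

  // Placeholder threshold, well above the cost of a single clock read.
  const std::chrono::nanoseconds diff_precision = std::chrono::microseconds(100);

  // Fork a few competitors so the scheduler actually has to preempt us.
  for (int i = 0; i < 5; i++) {
    if (0 == fork()) {
      break;
    }
  }

  for (int num_tries = 0; num_tries < 10; num_tries++) {
    hr_clock::time_point t1 = hr_clock::now();
    hr_clock::time_point t2 = hr_clock::now();
    // Spin until two consecutive reads are further apart than the threshold.
    while (t2 - t1 <= diff_precision) {
      t1 = t2;
      t2 = hr_clock::now();
    }
    // A gap this large should be dominated by time spent off the CPU,
    // not by the cost of now() itself.
    std::cout << std::chrono::duration_cast<std::chrono::nanoseconds>(t2 - t1).count()
              << std::endl;
  }
  return 0;
}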

0 Answers