So I was trying to experimentally determine the (average) time slice on my Linux 4.4.0-98-generic system, given the hint from here. The following is my code, using the C++14 STL.
#include <chrono>
#include <iostream>
#include <unistd.h>

int main(int argc, char** argv) {
    std::chrono::time_point<std::chrono::high_resolution_clock> clock_t1 = std::chrono::high_resolution_clock::now();
    std::chrono::time_point<std::chrono::high_resolution_clock> clock_t2 = std::chrono::high_resolution_clock::now();
    std::chrono::duration<long long, std::nano> diff = clock_t2 - clock_t1;
    std::chrono::duration<long long, std::nano> diff_precision = std::chrono::nanoseconds(1LL);

    // Fork a few children so that several processes compete for the CPU;
    // each child breaks out of the loop and runs the measurement below.
    for (int i = 0; i < 5; i++) {
        if (0 == fork()) {
            break;
        }
    }

    int num_tries = 0;
    while (num_tries < 10) {
        // Spin until two consecutive clock reads are further apart than
        // diff_precision, then report that gap.
        while ((diff = clock_t2 - clock_t1) <= diff_precision) {
            clock_t1 = clock_t2;
            clock_t2 = std::chrono::high_resolution_clock::now();
        }
        std::cout << diff.count() << std::endl;
        clock_t1 = std::chrono::high_resolution_clock::now();
        clock_t2 = std::chrono::high_resolution_clock::now();
        num_tries++;
    }
    return 0;
}
The problem is that I always get the same numbers for all processes. Example output:
480
480
480
480
480
480
Also, it seemed strange that the time slice would be about half a microsecond, so I tried different values of diff_precision, right from 1 ns up to 100 ms. For each precision the output landed in the same range as the chosen threshold; e.g. for a 1 ms precision it would print a bunch of 5531380s. Strange indeed.
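My suspicion (an assumption on my part, not something I have verified) is that with a 1 ns threshold the inner loop exits on the very first pair of reads, so the 480 is really just the cost of two back-to-back high_resolution_clock::now() calls rather than any scheduling effect. As a sanity check I sketched a variant that, instead of stopping at the first gap above a threshold, busy-waits for a fixed window and remembers the largest gap between consecutive reads, on the theory that the biggest gaps are where the process was preempted. The one-second window and the names are my own arbitrary choices.

#include <chrono>
#include <iostream>

int main() {
    using hr_clock = std::chrono::high_resolution_clock;

    // Busy-wait for roughly one second, remembering the largest gap
    // between consecutive clock reads. Assumption: the largest gaps
    // correspond to the scheduler taking the CPU away, not clock noise.
    auto start = hr_clock::now();
    auto prev = start;
    std::chrono::nanoseconds max_gap{0};

    while (hr_clock::now() - start < std::chrono::seconds(1)) {
        auto now = hr_clock::now();
        auto gap = std::chrono::duration_cast<std::chrono::nanoseconds>(now - prev);
        if (gap > max_gap) {
            max_gap = gap;
        }
        prev = now;
    }

    std::cout << "largest gap: " << max_gap.count() << " ns" << std::endl;
    return 0;
}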
Is this even a reliable method of finding out the OS time slice, or should I rather just trust the values from the kernel source code?
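For comparison, I also thought about asking the kernel directly rather than digging through the source. The sketch below rests on a couple of assumptions on my part: that sched_rr_get_interval() reports a per-process quantum (it is only well-defined for SCHED_RR tasks; what it returns for a normal CFS task varies by kernel), and that the CFS tunables sched_latency_ns and sched_min_granularity_ns are exposed under /proc/sys/kernel/ on this particular kernel build.

#include <fstream>
#include <initializer_list>
#include <iostream>
#include <sched.h>
#include <time.h>

int main() {
    // Per-process quantum as reported by the kernel. This is the
    // round-robin quantum for SCHED_RR; for ordinary CFS tasks the
    // meaning of the returned value is kernel-dependent.
    struct timespec ts;
    if (sched_rr_get_interval(0, &ts) == 0) {
        std::cout << "sched_rr_get_interval: "
                  << ts.tv_sec << " s " << ts.tv_nsec << " ns" << std::endl;
    }

    // CFS tunables; these paths assume the kernel exposes them,
    // which may not be the case on every build.
    for (const char* path : {"/proc/sys/kernel/sched_latency_ns",
                             "/proc/sys/kernel/sched_min_granularity_ns"}) {
        std::ifstream in(path);
        long long ns = 0;
        if (in >> ns) {
            std::cout << path << ": " << ns << " ns" << std::endl;
        }
    }
    return 0;
}

If I understand CFS correctly, sched_latency_ns is a target period that gets divided among the runnable tasks (bounded below by sched_min_granularity_ns), so there may not be a single fixed time slice to recover in the first place.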
P.S.: “A question with that title already exists; please be more specific.” Oh, c'mon, SO.