There's a fundamental difficulty here: the timer interrupt is, by definition, asynchronous, and on any given timer interrupt the kernel may decide to preempt your process... or not.
You can probably access (well, I assume you have administrator rights if you are tackling such a challenging problem) a kernel counter that registers the number of context switches performed, and from it compute how many times your process has been scheduled off the CPU since some point in the past.
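For the per-process part of that counter you don't even need root on FreeBSD or Linux: getrusage(2) reports how many times the calling process gave up the CPU voluntarily and how many times it was preempted. A minimal sketch:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rusage ru;

        if (getrusage(RUSAGE_SELF, &ru) == -1) {
            perror("getrusage");
            return 1;
        }
        /* ru_nvcsw:  times the process gave up the CPU by itself (blocking, sleeping)
         * ru_nivcsw: times the kernel preempted it (slice expired, higher-priority task) */
        printf("voluntary context switches:   %ld\n", ru.ru_nvcsw);
        printf("involuntary context switches: %ld\n", ru.ru_nivcsw);
        return 0;
    }

Call it once before and once after the code you want to observe, and subtract the counters.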
What is certain is that your process is scheduled (running) when you first start the stopwatch, and again when you come back to see what has happened.
Trying to know how many times your process has been interrupted by something between two points in the program is a huge task: many devices generate interrupts, and you cannot, in general, assume that any particular interrupt led to a context switch. The kernel normally maintains counters of how many interrupts of each type have occurred, so the difference between two readings tells you how many interrupts of type X happened in between. Which ones they were, and how they translated into a process (or thread) context switch, is normally not available except by looking at the kernel source code.
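As an illustration of that counter-difference idea, here is a Linux-specific sketch (on FreeBSD you would read the equivalent counters through sysctl or vmstat(8) instead) that samples the system-wide interrupt and context-switch totals from /proc/stat one second apart:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Read the system-wide totals for interrupts ("intr") and
     * context switches ("ctxt") from /proc/stat. */
    static int read_counters(unsigned long long *intr, unsigned long long *ctxt)
    {
        char line[4096];
        FILE *f = fopen("/proc/stat", "r");

        if (!f)
            return -1;
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, "intr ", 5) == 0)
                sscanf(line + 5, "%llu", intr);   /* first figure is the grand total */
            else if (strncmp(line, "ctxt ", 5) == 0)
                sscanf(line + 5, "%llu", ctxt);
        }
        fclose(f);
        return 0;
    }

    int main(void)
    {
        unsigned long long i0 = 0, c0 = 0, i1 = 0, c1 = 0;

        if (read_counters(&i0, &c0) == -1)
            return 1;
        sleep(1);                                  /* take the two samples one second apart */
        if (read_counters(&i1, &c1) == -1)
            return 1;
        printf("interrupts in the last second:       %llu\n", i1 - i0);
        printf("context switches in the last second: %llu\n", c1 - c0);
        return 0;
    }

The numbers are system-wide, so they tell you how busy the machine was, not which of those interrupts hit your process.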
But... you have a simple way to get an estimate. If you compare the different clock_gettime(2) clocks available on your system, you can estimate how much CPU time your process has consumed on its own behalf and how much time the system has spent doing something else. The difference between clocks only gives you a coarse, imprecise value. E.g. FreeBSD has
CLOCK_VIRTUAL
Increments only when the CPU is running in user mode on behalf of the calling process.
CLOCK_PROF
Increments when the CPU is running in user or kernel mode.
CLOCK_PROCESS_CPUTIME_ID
Returns the execution time of the calling process.
CLOCK_THREAD_CPUTIME_ID
Returns the execution time of the calling thread.
CLOCK_REALTIME_PRECISE
Increments as a wall clock should.
So you have many clocks to choose from. On Linux the set of clocks is smaller, but you can usually still get a rough idea, as in the sketch below.
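A minimal sketch of that comparison, using CLOCK_MONOTONIC as the wall clock and CLOCK_PROCESS_CPUTIME_ID as the per-process CPU clock (both exist on Linux and FreeBSD; on FreeBSD you could use CLOCK_VIRTUAL or CLOCK_PROF instead to separate user time from user+kernel time):

    #include <stdio.h>
    #include <time.h>

    static double ts_to_s(struct timespec t)
    {
        return t.tv_sec + t.tv_nsec / 1e9;
    }

    int main(void)
    {
        struct timespec wall0, cpu0, wall1, cpu1;
        struct timespec nap = { 0, 100 * 1000 * 1000 };   /* 100 ms */
        volatile unsigned long x = 0;

        clock_gettime(CLOCK_MONOTONIC, &wall0);
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &cpu0);

        /* Alternate busy work and sleeping so the two clocks diverge:
         * sleeping advances the wall clock but not the CPU clock. */
        for (int i = 0; i < 5; i++) {
            for (unsigned long j = 0; j < 50UL * 1000 * 1000; j++)
                x += j;
            nanosleep(&nap, NULL);
        }

        clock_gettime(CLOCK_MONOTONIC, &wall1);
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &cpu1);

        double wall = ts_to_s(wall1) - ts_to_s(wall0);
        double cpu  = ts_to_s(cpu1)  - ts_to_s(cpu0);

        printf("wall time elapsed:          %.6f s\n", wall);
        printf("cpu time charged to us:     %.6f s\n", cpu);
        printf("time spent doing otherwise: %.6f s\n", wall - cpu);
        return 0;
    }

The last figure is exactly the coarse estimate described above: time during which the machine was running, but not on your behalf (other processes, interrupts, or your own sleeps).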
NOTE
From your comment:
@ЯрославМашко I've tried it and it worked out with the answer 15ms, but is there any way to solve it by creating process to full CPU to observe time-slices switching?
You cannot do this in a user process, for the evident reason that it will be scheduled out by the kernel to let other processes run, and then you can measure nothing at all. Anything along these lines requires touching a lot of kernel code and makes your system very inefficient, because of the time it takes to collect all that information at every context switch. But you have the complete kernel source, so you can probably investigate this approach. Normal tools like top(1) or htop(1) read counters that the kernel keeps updated and compute averages of what the kernel is doing... and that is because logging all kernel activity for a single process to digest is a recursive situation: the processing you do must itself be accounted for, so you end up in an endless loop in which the finer the results you want, the more severely you degrade the system load.
There are processors (most likely ARM-based ones) with debug modules that can emit, on a dedicated line, everything they do while running at full speed. But the information they produce normally must be processed by a more powerful system, capable of digesting the huge amount of data these CPUs emit. In any case, since the information is never consumed on the system being debugged, this does not pose a problem (the recursion is broken).
By the way, the 15 ms you report in another comment seems very strange. You got rescheduled within 15 ms, but to make that measurement you had to do at least three system calls (two to ask for the clock time and one to sleep for a minimum interval). This means nothing, because the kernel normally lets a process run for longer if it has work to do. A context switch implies switching the whole virtual address space, which invalidates a lot of cached memory information and slows the CPU considerably, so the kernel normally grants a process a longer slice in case it needs it. If the CPU came back to you in so little time, it is probably because your process was the only one eligible to run, and in that case the most likely explanation is that the kernel scheduled nothing else in between. There is a maximum time the kernel allows a process to run, so the system cannot be locked up by one CPU-consuming process, but it is also normal for a process not to consume its whole slot... so, to test anything in this direction, you need several processes running at full CPU load to see how the kernel hands the CPU to each of them; something like the sketch below.
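A rough illustration of that kind of experiment (an estimate, not a real measurement: a long gap between two consecutive samples usually means the process was scheduled out, but it can also come from interrupt handling, CPU frequency changes, and so on):

    #include <stdio.h>
    #include <time.h>

    static double ts_to_s(struct timespec t)
    {
        return t.tv_sec + t.tv_nsec / 1e9;
    }

    int main(void)
    {
        struct timespec prev, now;
        const double threshold = 0.001;     /* report gaps longer than 1 ms (arbitrary) */
        int gaps = 0;

        clock_gettime(CLOCK_MONOTONIC, &prev);
        while (gaps < 20) {                 /* stop after a handful of observations */
            clock_gettime(CLOCK_MONOTONIC, &now);
            double gap = ts_to_s(now) - ts_to_s(prev);
            if (gap > threshold) {
                gaps++;
                printf("gap #%2d: off the CPU (or delayed) for ~%.3f ms\n",
                       gaps, gap * 1e3);
            }
            prev = now;
        }
        return 0;
    }

Start one copy per CPU core plus one extra, so that they are forced to share time slices, and compare the gap lengths each copy observes.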
On a normal 64-bit Intel CPU, the FreeBSD kernel runs at 1000 ticks/second, which means that, normally, about every 1/1000 s the kernel decides whether to switch the CPU from one process to another. Many times it decides not to switch and lets the in-CPU process keep running. On Linux you can make the kernel tickless, meaning the timer does not interrupt the CPU at a fixed rate, but only when something has armed a timer to be woken on.
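If you want to check that tick rate on FreeBSD, the kernel exposes it as the kern.hz sysctl; a FreeBSD-specific sketch (Linux does not export its HZ this way):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/sysctl.h>

    int main(void)
    {
        int hz;
        size_t len = sizeof(hz);

        /* kern.hz is the base rate of the kernel's timer interrupt */
        if (sysctlbyname("kern.hz", &hz, &len, NULL, 0) == -1) {
            perror("sysctlbyname");
            return 1;
        }
        printf("kern.hz = %d ticks/second\n", hz);
        return 0;
    }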