Ok, I asked about this a couple of hours ago, and I decided to run the same test in plain C.
Example 1:
#include <stdio.h>
#include <unistd.h>     /* usleep(), useconds_t */
#include <sys/time.h>   /* gettimeofday() */

int main(void)
{
    printf("Time testing, pure C\n");
    struct timeval tv;
    int i;
    int values[1000];
    useconds_t usec = 1000;    /* request a 1000 us = 1 ms sleep per iteration */

    for (i = 0; i < 1000; i++)
    {
        gettimeofday(&tv, NULL);   /* record the microseconds-within-second part */
        values[i] = tv.tv_usec;
        usleep(usec);
    }
    for (i = 0; i < 1000; i++)
    {
        printf("%d\n", values[i]);
    }
    return 0;
}
Output:
...
110977
111977
112977
113977
114978
115978
116978
...
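The consecutive samples step by roughly 1000 µs each, i.e. exactly the requested 1 ms. To make that easier to see at a glance, here is a small variation (my own addition, not part of the original test) that stores absolute microseconds and prints the difference between consecutive samples instead of the raw tv_usec values:

#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>

int main(void)
{
    struct timeval tv;
    long long samples[1000];   /* absolute microseconds, so second rollovers don't break the deltas */
    int i;

    for (i = 0; i < 1000; i++)
    {
        gettimeofday(&tv, NULL);
        samples[i] = (long long)tv.tv_sec * 1000000LL + tv.tv_usec;
        usleep(1000);
    }
    for (i = 1; i < 1000; i++)
    {
        /* delta between consecutive samples: expected ~1000 us if usleep(1000) is accurate */
        printf("%lld\n", samples[i] - samples[i - 1]);
    }
    return 0;
}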
Ok, not bad! A 1 ms sleep works fine. Now let's try a 100 us sleep (same code, only usec = 100):
#include <stdio.h>
#include <unistd.h>     /* usleep(), useconds_t */
#include <sys/time.h>   /* gettimeofday() */

int main(void)
{
    printf("Time testing, pure C\n");
    struct timeval tv;
    int i;
    int values[1000];
    useconds_t usec = 100;     /* request a 100 us sleep per iteration */

    for (i = 0; i < 1000; i++)
    {
        gettimeofday(&tv, NULL);   /* record the microseconds-within-second part */
        values[i] = tv.tv_usec;
        usleep(usec);
    }
    for (i = 0; i < 1000; i++)
    {
        printf("%d\n", values[i]);
    }
    return 0;
}
Output:
...
674680
675680
676680
677680
678680
679680
680681
681681
...
That's bad, because it's practically the same result: the samples still advance by about 1000 µs per iteration, so only the milliseconds change and the microsecond part effectively doesn't. So what's going on, and where is my mistake? And can I get proper microsecond resolution on Windows?
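For completeness, here is the kind of direct check I would try next: timing each individual usleep(100) call with gettimeofday before and after (again my own sketch, using only the same POSIX calls as above):

#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>

int main(void)
{
    struct timeval before, after;
    int i;

    for (i = 0; i < 20; i++)
    {
        gettimeofday(&before, NULL);
        usleep(100);                      /* request a 100 us sleep */
        gettimeofday(&after, NULL);

        /* how long the sleep really took, in microseconds */
        long long elapsed =
            ((long long)after.tv_sec - before.tv_sec) * 1000000LL +
            (after.tv_usec - before.tv_usec);
        printf("requested 100 us, got %lld us\n", elapsed);
    }
    return 0;
}

If each line reports something close to 1000 µs rather than 100 µs, that would point at the sleep/timer granularity on my system rather than at gettimeofday itself.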