Take this function, which returns the current time as a double using clock_gettime:
#include <time.h>

// return current time in milliseconds
static double time_get_ms(void)
{
    struct timespec res;
#ifdef ANDROID
    clock_gettime(CLOCK_REALTIME_HR, &res);
#else
    clock_gettime(CLOCK_REALTIME, &res);
#endif
    return 1000.0 * res.tv_sec + res.tv_nsec / 1e6;
}
Sending it to a shader requires converting it to a float. The value overflows the 24-bit mantissa of a single-precision float, so precision is truncated on the way to the shader.
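For context, the narrowing happens at the point where the value is handed to the shader; in my case that is roughly a glUniform1f call like this sketch (the "u_time" uniform name is just an illustration):

#include <GLES2/gl2.h>   /* OpenGL ES 2.0 header; adjust for desktop GL */

/* Sketch: upload the current time to a shader uniform.
   glUniform1f takes a GLfloat, so the double is narrowed to
   single precision right here. */
static void upload_time(GLuint program)
{
    GLint loc = glGetUniformLocation(program, "u_time");
    glUniform1f(loc, (GLfloat)time_get_ms());
}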
Example:
As a double = 1330579093642.441895
As a float = 1330579111936.000000
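The loss is easy to reproduce in isolation; this standalone program (the constant is the double value from the example above) prints both representations:

#include <stdio.h>

int main(void)
{
    double ms = 1330579093642.441895;   /* the double value from above */
    float  f  = (float)ms;              /* what the shader receives */

    printf("as a double: %f\n", ms);
    printf("as a float:  %f\n", (double)f);
    return 0;
}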
Because of the lost precision, the float value gets stuck at a single number for long stretches of time.
It also seems that even the seconds value in res.tv_sec alone is too large for a float; it too is truncated on the way to the GPU.
When I instead try to measure time since application launch, I run into the same problem rather quickly.
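For reference, this sketch is roughly how I measure time since launch (the static start-time capture and the use of CLOCK_MONOTONIC are my own details here, and CLOCK_MONOTONIC may not be available on every platform):

#include <time.h>

/* Sketch: elapsed milliseconds since the first call, using a
   monotonic clock. Float precision still degrades as the elapsed
   value grows, which is the problem I keep hitting. */
static float time_since_launch_ms(void)
{
    static struct timespec start;
    static int initialized = 0;
    struct timespec now;

    clock_gettime(CLOCK_MONOTONIC, &now);
    if (!initialized) {
        start = now;
        initialized = 1;
    }
    return (float)((now.tv_sec - start.tv_sec) * 1000.0
                   + (now.tv_nsec - start.tv_nsec) / 1e6);
}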
So what is the best way to get a running time value into a shader? Something cross-platform in the Linux world (so iOS, Android, Linux).