I'm using Visual Studio 2012 and would like to know about the accuracy of std::chrono::high_resolution_clock.
Basically I'm writing some code to display sound and images, and I need them to be very well synchronised; the images must also be tear free. I'm using DirectX to get tear-free images and am timing screen refreshes with high_resolution_clock. The display claims to run at 60 fps, but timing with high_resolution_clock gives a refresh rate of 60.035 fps, averaged over 10000 screen refreshes. Depending on which is correct, my audio will end up out by roughly 0.6 ms after a second, which is around 2 s after an hour. I would expect any clock to be more accurate than that - more like 1 s of drift over a year, not over an hour.
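To make sure I have the arithmetic right, here is how I'm working out those drift figures (a quick sketch; the 60 and 60.035 fps values are the ones quoted above):

#include <cstdio>

int main()
{
    const double nominalFps  = 60.0;    // what the display claims
    const double measuredFps = 60.035;  // what high_resolution_clock measures

    // The fractional disagreement between the two rates is the audio/video
    // drift accumulated per second of playback
    double driftPerSecond = (measuredFps - nominalFps) / nominalFps;
    std::printf("drift: %.3f ms per second, %.2f s per hour\n",
                driftPerSecond * 1000.0, driftPerSecond * 3600.0);
    // prints: drift: 0.583 ms per second, 2.10 s per hour
    return 0;
}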
Has anyone looked at this kind of thing before? Should I expect my sound card's clock to be different again?
Edit: here is my timing code. This while loop runs in my rendering thread. m_renderData is an array of structs containing the data needed to render my scene, with one element per screen. For these tests I'm only running on one screen, so it has just one element.
while (!TestDestroy())
{
    for (size_t i = 0; i < m_renderData.size(); ++i)
    {
        // Work out where in the vsync cycle we are and advance the render
        // stage as needed until it is time to actually render
        D3DRASTER_STATUS rStatus;
        m_renderData[i].deviceD3D9->GetRasterStatus(0, &rStatus);
        if (m_renderData[i].renderStage == inVBlankRenderingComplete)
        {
            if (!rStatus.InVBlank)
                m_renderData[i].renderStage = notInVBlank;
        }
        else if (m_renderData[i].renderStage == notInVBlank)
        {
            if (rStatus.InVBlank)
                m_renderData[i].renderStage = inVBlankReadyToRender;
        }
        // Check whether we have missed the vsync window for rendering
        bool timeOut = false;
        if (m_renderData[i].durations.size() > 0)
        {
            double timeSinceLastRender =
                std::chrono::duration_cast<std::chrono::microseconds>(
                    std::chrono::high_resolution_clock::now() - m_renderData[i].durations.back()).count();
            if (timeSinceLastRender > expectedUpdatePeriod * 1.2)
                timeOut = true;
        }
        if (m_renderData[i].renderStage == inVBlankReadyToRender || timeOut)
        {
            // We have reached the time to render: record the time and
            // increment the number of renders that have been performed
            m_renderData[i].durations.push_back(std::chrono::high_resolution_clock::now());
            ++m_renderData[i].nRenders;
            // The fps is calculated from 10001 timestamps - i.e. an interval of 10000 frames
            size_t fpsUpdatePeriod = 10001;
            if (m_renderData[i].nRenders < fpsUpdatePeriod)
            {
                // Not enough timestamps yet, so display a message instead
                m_renderData[i].fpsString = "FPS: Calculating";
            }
            else
            {
                // We have enough timing info; calculate the fps over the last 10000 frames
                double meanFrameTime =
                    std::chrono::duration_cast<std::chrono::microseconds>(
                        m_renderData[i].durations.back() - *(m_renderData[i].durations.end() - fpsUpdatePeriod)).count()
                    / double(fpsUpdatePeriod - 1);
                double fps = 1000000.0 / meanFrameTime;
                saveFps(fps);
            }
            // Render to the back buffer for this screen
            renderToBackBuffer(i);
            // Display the back buffer
            if (!TestDestroy())
                m_renderData[i].deviceD3D9->Present(NULL, NULL, NULL, NULL);
            // Make sure we render to the correct back buffer next time
            m_renderData[i].bufferToRender--;
            // Update the render cycle
            m_renderData[i].renderStage = inVBlankRenderingComplete;
        }
    }
}
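For what it's worth, here is the quick check I put together to see what the clock itself advertises (a minimal sketch - it just prints the tick period the clock claims and whether it is steady):

#include <chrono>
#include <iostream>

int main()
{
    typedef std::chrono::high_resolution_clock hrc;
    // The advertised tick period, in seconds per tick
    std::cout << "tick period: "
              << static_cast<double>(hrc::period::num) / hrc::period::den << " s\n";
    // A steady clock is never adjusted; a non-steady one can jump
    // (e.g. when the OS syncs the system time), which would matter
    // when timing long intervals like the 10000 frames above
    std::cout << "is_steady: " << hrc::is_steady << '\n';
    return 0;
}

If is_steady comes back false then, as I understand it, the clock can be adjusted underneath me while I'm measuring, which could account for some of the disagreement.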