I'm writing an OpenGL application. At startup it shows very large CPU consumption inside SwapBuffers (profiled with Intel VTune Profiler), which made my users unhappy. I searched around, and someone suggested sleeping until the next vsync, so I made the OpenGL thread sleep like this:
// in OpenGL thread; requires <chrono> and <thread>
auto render_start = std::chrono::steady_clock::now();
do_render();
auto render_end = std::chrono::steady_clock::now();
auto render_length = render_end - render_start;
auto expect_length = std::chrono::milliseconds( 1000 / 60 - 1 ); // an extra 1 ms gap to prevent oversleep
if (expect_length > render_length)
std::this_thread::sleep_for( expect_length - render_length );
However, in Task Manager the CPU cost does not decrease. Detailed analysis in Intel VTune Profiler shows that the sleep accounts for a large amount of CPU spin time, while total CPU consumption is approximately the same as in the version without the sleep. So is there any way to truly reduce the CPU cost of SwapBuffers, or of the sleep?