
I need to create a C++ function that will return the number of seconds until the next Vsync interval as a floating point value.

Why?

I am creating programs that display rectangles that follow the mouse cursor. Ostensibly OpenGL provides a vsync mechanism in the glXSwapBuffers function, but I have found this to be unreliable. With some card drivers you get vsync; with others you don't. On some you get vsync, but you also get an extra two frames of latency.

But this is not a bug in OpenGL. The spec is intentionally vague: "The contents of the back buffer then become undefined. The update typically takes place during the vertical retrace of the monitor, rather than immediately after glXSwapBuffers is called." The key word being "typically"... basically glXSwapBuffers doesn't promise squat w.r.t. vsync. Go figure.

In my current attempt to solve this, I guess an initial vsync time and then afterwards assume the phase equals elapsed time MOD 1/(59.85 Hz), which seems to match my current monitor. But this doesn't work so well because I don't actually know the initial phase. So I get one tear. At least it doesn't move around. But what I really need is to just measure the current vsync phase somehow.
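
A minimal sketch of that phase arithmetic (assuming a 1/59.85 s refresh period and some guessed phase_origin time point; both are the guesses described above, not measured values):

#include <chrono>
#include <cmath>

// Returns the estimated number of seconds until the next vsync, given a
// guessed time point of some past vsync and the assumed refresh period.
double seconds_until_next_vsync(std::chrono::steady_clock::time_point phase_origin,
                                double refresh_period /* e.g. 1.0 / 59.85 */)
{
    using namespace std::chrono;
    double elapsed = duration<double>(steady_clock::now() - phase_origin).count();
    double phase   = std::fmod(elapsed, refresh_period);  // time since last assumed vsync
    return refresh_period - phase;                        // time until the next one
}

The whole estimate is only as good as phase_origin, which is exactly the problem: the initial phase is unknown.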

No, I don't want to rely on some OpenGL call to do a vsync for me. Because of the vagueness in the spec, that leaves the OpenGL implementation free to add as much latency as it pleases.

No, I don't want to rely on some SGI extension or some other thing that has to be installed to make it work. This is graphics 101. Vsync. Just need a way to query its state. SOME builtin, always-installed API must have this.

Maybe I can create a secondary thread that waits for Vsync somehow, and records the time when this happens? But be aware that the following sequence:

#include <sys/ioctl.h>
#include <fcntl.h>
#include <linux/types.h>
#include <linux/ioctl.h>
#include <linux/fb.h>
#include <assert.h>
#include <errno.h>
#include <string.h>
#include <stdio.h>

int main()
{
  int fb = open("/dev/fb0", O_RDWR);
  assert(fb != -1);
  int zero = 0;
  /* Block until the next vertical retrace (where the driver supports it). */
  if (ioctl(fb, FBIO_WAITFORVSYNC, &zero) == -1)
    printf("fb ioctl failed: %s\n", strerror(errno));
  return 0;
}

does NOT work in Debian. Result:

% ./a.out
fb ioctl failed: Inappropriate ioctl for device
% ls -l /dev/fb0
crw-rw-rw- 1 root video 29, 0 Sep  1 20:52 /dev/fb0
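
(For what it's worth, on a driver where FBIO_WAITFORVSYNC is actually implemented, the secondary-thread idea could look roughly like the sketch below: a thread blocks on the ioctl and publishes a timestamp that the render loop could use to seed the phase estimate. Everything here is an assumption about such a driver, not something that works on the Debian box above.)

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/fb.h>
#include <unistd.h>
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<double> g_last_vsync{0.0};   // seconds since an arbitrary epoch

void vsync_thread()
{
    int fb = open("/dev/fb0", O_RDWR);
    if (fb == -1)
        return;                          // no framebuffer device; fall back elsewhere
    for (;;) {
        int zero = 0;
        if (ioctl(fb, FBIO_WAITFORVSYNC, &zero) == -1)
            break;                       // driver doesn't implement the ioctl
        auto now = std::chrono::steady_clock::now().time_since_epoch();
        g_last_vsync = std::chrono::duration<double>(now).count();
    }
    close(fb);
}

// Launched once, e.g. with: std::thread(vsync_thread).detach();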

There must be some way to just read the phase from a device, or some other OpenGL call. OpenGL is THE THING for graphics. Vsync is graphics 101.

Please help.

personal_cloud

4 Answers


When you search for FBIO_WAITFORVSYNC in the Linux kernel sources, you can see that it is implemented only for a few graphics cards, not for all of them.

So, if you happen to have one of the many other cards, you get "Inappropriate ioctl for device", which just means it is not implemented for your graphics card's driver.
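
A program can detect that situation at run time and fall back to some other timing source; a minimal sketch (the fallback itself is left out):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/fb.h>
#include <unistd.h>

// Returns true only if the framebuffer driver actually implements
// FBIO_WAITFORVSYNC; ENOTTY ("Inappropriate ioctl for device") means it does not.
bool driver_supports_waitforvsync(const char *dev /* e.g. "/dev/fb0" */)
{
    int fb = open(dev, O_RDWR);
    if (fb == -1)
        return false;
    int zero = 0;
    bool ok = (ioctl(fb, FBIO_WAITFORVSYNC, &zero) == 0);
    close(fb);
    return ok;
}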

Maybe How to wait for VSYNC in Xlib app? points you in the right direction.

Olaf Dietsche

Outline of a solution that is better than giving up:

  1. Search Digi-Key for a MAX chip that outputs the sync signal.

  2. Install an RS-232 card.

  3. Connect the sync signal to a handshake line on the RS-232 port.

  4. Use the standard termios/serial API, which will work on any Linux (see the sketch after this list).

  5. Encase amazing product in ceramic epoxy block and sell for $500.
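
For illustration only: one way to timestamp a pulse on a serial handshake line under Linux is the TIOCMIWAIT ioctl (strictly an ioctl on the serial device rather than a termios call, but available with the standard serial drivers). The device path /dev/ttyS0 and the choice of the CTS line are assumptions about this hypothetical hardware setup.

#include <fcntl.h>
#include <sys/ioctl.h>
#include <termios.h>
#include <chrono>
#include <cstdio>

int main()
{
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
    if (fd == -1) { perror("open"); return 1; }

    for (;;) {
        // Block until the CTS modem-control line changes state.
        if (ioctl(fd, TIOCMIWAIT, TIOCM_CTS) == -1) { perror("TIOCMIWAIT"); return 1; }
        auto t = std::chrono::steady_clock::now();   // timestamp of the sync edge
        // ... feed t into the phase estimator from the question ...
        (void)t;
    }
}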

personal_cloud

This is graphics 101. Vsync. Just need a way to query its state. SOME builtin, always-installed API must have this.

No, there "must"n't be a way to do that. At least, not anything that gets exposed to you. And certainly not anything cross-platform.

After all, you do not own the screen. The system owns the screen; you are only renting some portion of it, and thus are at the mercy of the system. The system deals with vsync; your job is to fill in the image(s) that get displayed there.

Consider Vulkan, which is about as low level as you're going to get these days without actually being the graphics driver. Its WSI interface is explicitly designed to avoid allowing you to do things like "wait until the next vsync".

Its presentation system does offer a variety of modes, but the only one that implementations are required to support is FIFO: strict vsync, but no tearing. Of course, Vulkan's WSI does at least allow you to choose how much image buffering you want. But if you use FIFO with only a double-buffer, and you're late at providing that image, then your swap isn't going to be visible until the next vsync.
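
For illustration, a hedged sketch of how a Vulkan application typically picks its present mode, falling back to FIFO because that is the only mode the spec requires; preferring MAILBOX first is just an example policy, not something this answer prescribes:

#include <vulkan/vulkan.h>
#include <vector>

VkPresentModeKHR choose_present_mode(VkPhysicalDevice phys, VkSurfaceKHR surface)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceSurfacePresentModesKHR(phys, surface, &count, nullptr);
    std::vector<VkPresentModeKHR> modes(count);
    vkGetPhysicalDeviceSurfacePresentModesKHR(phys, surface, &count, modes.data());

    for (VkPresentModeKHR m : modes)
        if (m == VK_PRESENT_MODE_MAILBOX_KHR)    // latest-image-wins, no tearing, if offered
            return m;
    return VK_PRESENT_MODE_FIFO_KHR;             // always available: strict vsync queue
}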

Nicol Bolas
  • The system owns the screen, but VSYNC affects my window. So I ought to be able to access it. – personal_cloud Aug 08 '17 at 19:36
  • Nicol is saying that if you stack up N buffered frames, you let the system worry about when to pull the frames out of the queue and apply them to the screen. Just wanted to make that clear. – Scott Franco May 03 '21 at 23:23

A short answer is: vsync used to be popular on computers when video buffering was expensive. Nowadays, with the general use of double-buffered animation, it is less important. I used to get access to vsync from the graphics card on an IBM PC before windowing systems, and would not mind getting VSYNC even now. With double buffering, you still run the risk that the raster scan occurs while you are blitting the back buffer to video memory, so it would be nice to sync that. However, with double buffering you eliminate a lot of the "sparkle" effects and other artifacts of drawing directly to the screen, because you are doing a linear blit instead of individual pixel manipulation.

It's also possible (as the previous poster implied) that the fact that both of your buffers exist in video memory, together with the display manager carefully managing the blits to the screen (compositing), renders these artifacts nonexistent.

How do I handle this now? I keep a frame timer of, say, 30 frames per second, which I use to flip the buffers. It is not particularly synchronized to the actual frame time on the graphics card.
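
A minimal sketch of such a fixed-rate frame timer (the drawing and flipping calls are hypothetical placeholders):

#include <chrono>
#include <thread>

void frame_loop()
{
    using clock = std::chrono::steady_clock;
    const auto frame_period = std::chrono::duration<double>(1.0 / 30.0);
    auto next = clock::now();
    for (;;) {
        // draw_into_back_buffer();   // hypothetical application drawing
        // flip_buffers();            // hypothetical buffer swap
        next += std::chrono::duration_cast<clock::duration>(frame_period);
        std::this_thread::sleep_until(next);   // free-running; not locked to the card's vsync
    }
}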

Scott Franco
  • Sure, the extra buffer itself is cheap. But then you're adding a more complex API. As noted in the question, I found glXSwapBuffers not to behave consistently enough (across system configurations) to be able to rely on it. And double buffering also adds up to 1 frame of latency. – personal_cloud May 05 '21 at 20:28