I'm writing a video player in which my code decodes the video into raw YCbCr frames. What would be the fastest way to display these through the Qt framework? I want to avoid copying the data around more than necessary, since the frames are HD.
I am afraid that a software color conversion into a QImage would be slow, and that the QImage would then be copied again when it is drawn into the GUI.
I have had a look at QAbstractVideoSurface and even have running code, but I cannot see how this is any faster: as in the VideoWidget example (http://idlebox.net/2010/apidocs/qt-everywhere-opensource-4.7.0.zip/multimedia-videowidget.html), rendering is still done by calling QPainter::drawImage with a QImage, which has to be in RGB.
The ideal solution, as I see it, would be direct access to a hardware surface into which I could decode the YCbCr, or at least do the RGB conversion (with libswscale) directly. But I cannot see how to do this without using OpenGL (which, admittedly, would also give me scaling for free).