4

I want to save an image of a frame from a QMediaPlayer. After reading the documentation, I understood that I should use QVideoProbe. I am using the following code:

QMediaPlayer *player = new QMediaPlayer();
QVideoProbe *probe   = new QVideoProbe;

connect(probe, SIGNAL(videoFrameProbed(QVideoFrame)), this, SLOT(processFrame(QVideoFrame)));

qDebug() << probe->setSource(player); // Returns true, hopefully.

player->setVideoOutput(myVideoSurface);
player->setMedia(QUrl::fromLocalFile("observation.mp4"));
player->play(); // Start receiving frames as they get presented to myVideoSurface

But unfortunately, probe->setSource(player) always returns false for me, and thus my slot processFrame is not triggered.

What am I doing wrong? Does anybody have a working example of QVideoProbe?

IAmInPLS
user3627553
  • I had the same problem and I managed to find a workaround, I will post an answer soon (it is a bit long though) – IAmInPLS Jun 09 '16 at 12:09
  • OK, thank you very much, I am awaiting your reply. And in my previous post about QMediaPlayer [link](http://stackoverflow.com/questions/37680515/qmediaplayer-duration-error), I have answered you, please take a look – user3627553 Jun 09 '16 at 12:53

3 Answers

15

You're not doing anything wrong. As @DYangu pointed out, your media object instance does not support monitoring video. I had the same problem (and the same for QAudioProbe, but that's not relevant here). I found a solution by looking at this answer and this one.

The main idea is to subclass QAbstractVideoSurface. Once you've done that, it will call the method QAbstractVideoSurface::present(const QVideoFrame & frame) of your implementation of QAbstractVideoSurface and you will be able to process the frames of your video.

As it is said here, usually you will just need to reimplement two methods:

  1. supportedPixelFormats, so that the producer can select an appropriate format for the QVideoFrame
  2. present, which allows you to display the frame

At the time, I searched the Qt source code and happily found this piece of code, which helped me put together a full implementation. So, here is the full code for using a "video frame grabber".

VideoFrameGrabber.cpp:

#include "VideoFrameGrabber.h"

#include <QtWidgets>
#include <qabstractvideosurface.h>
#include <qvideosurfaceformat.h>

VideoFrameGrabber::VideoFrameGrabber(QWidget *widget, QObject *parent)
    : QAbstractVideoSurface(parent)
    , widget(widget)
    , imageFormat(QImage::Format_Invalid)
{
}

QList<QVideoFrame::PixelFormat> VideoFrameGrabber::supportedPixelFormats(QAbstractVideoBuffer::HandleType handleType) const
{
    Q_UNUSED(handleType);
    return QList<QVideoFrame::PixelFormat>()
        << QVideoFrame::Format_ARGB32
        << QVideoFrame::Format_ARGB32_Premultiplied
        << QVideoFrame::Format_RGB32
        << QVideoFrame::Format_RGB24
        << QVideoFrame::Format_RGB565
        << QVideoFrame::Format_RGB555
        << QVideoFrame::Format_ARGB8565_Premultiplied
        << QVideoFrame::Format_BGRA32
        << QVideoFrame::Format_BGRA32_Premultiplied
        << QVideoFrame::Format_BGR32
        << QVideoFrame::Format_BGR24
        << QVideoFrame::Format_BGR565
        << QVideoFrame::Format_BGR555
        << QVideoFrame::Format_BGRA5658_Premultiplied
        << QVideoFrame::Format_AYUV444
        << QVideoFrame::Format_AYUV444_Premultiplied
        << QVideoFrame::Format_YUV444
        << QVideoFrame::Format_YUV420P
        << QVideoFrame::Format_YV12
        << QVideoFrame::Format_UYVY
        << QVideoFrame::Format_YUYV
        << QVideoFrame::Format_NV12
        << QVideoFrame::Format_NV21
        << QVideoFrame::Format_IMC1
        << QVideoFrame::Format_IMC2
        << QVideoFrame::Format_IMC3
        << QVideoFrame::Format_IMC4
        << QVideoFrame::Format_Y8
        << QVideoFrame::Format_Y16
        << QVideoFrame::Format_Jpeg
        << QVideoFrame::Format_CameraRaw
        << QVideoFrame::Format_AdobeDng;
}

bool VideoFrameGrabber::isFormatSupported(const QVideoSurfaceFormat &format) const
{
    const QImage::Format imageFormat = QVideoFrame::imageFormatFromPixelFormat(format.pixelFormat());
    const QSize size = format.frameSize();

    return imageFormat != QImage::Format_Invalid
            && !size.isEmpty()
            && format.handleType() == QAbstractVideoBuffer::NoHandle;
}

bool VideoFrameGrabber::start(const QVideoSurfaceFormat &format)
{
    const QImage::Format imageFormat = QVideoFrame::imageFormatFromPixelFormat(format.pixelFormat());
    const QSize size = format.frameSize();

    if (imageFormat != QImage::Format_Invalid && !size.isEmpty()) {
        this->imageFormat = imageFormat;
        imageSize = size;
        sourceRect = format.viewport();

        QAbstractVideoSurface::start(format);

        widget->updateGeometry();
        updateVideoRect();

        return true;
    } else {
        return false;
    }
}

void VideoFrameGrabber::stop()
{
    currentFrame = QVideoFrame();
    targetRect = QRect();

    QAbstractVideoSurface::stop();

    widget->update();
}

bool VideoFrameGrabber::present(const QVideoFrame &frame)
{
    if (frame.isValid()) 
    {
        QVideoFrame cloneFrame(frame);
        cloneFrame.map(QAbstractVideoBuffer::ReadOnly);
        const QImage image(cloneFrame.bits(),
                           cloneFrame.width(),
                           cloneFrame.height(),
                           cloneFrame.bytesPerLine(), // needed so rows with padding are read correctly
                           QVideoFrame::imageFormatFromPixelFormat(cloneFrame.pixelFormat()));
        emit frameAvailable(image); // this is very important
        cloneFrame.unmap();
    }

    if (surfaceFormat().pixelFormat() != frame.pixelFormat()
            || surfaceFormat().frameSize() != frame.size()) {
        setError(IncorrectFormatError);
        stop();

        return false;
    } else {
        currentFrame = frame;

        widget->repaint(targetRect);

        return true;
    }
}

void VideoFrameGrabber::updateVideoRect()
{
    QSize size = surfaceFormat().sizeHint();
    size.scale(widget->size().boundedTo(size), Qt::KeepAspectRatio);

    targetRect = QRect(QPoint(0, 0), size);
    targetRect.moveCenter(widget->rect().center());
}

void VideoFrameGrabber::paint(QPainter *painter)
{
    if (currentFrame.map(QAbstractVideoBuffer::ReadOnly)) {
        const QTransform oldTransform = painter->transform();

        if (surfaceFormat().scanLineDirection() == QVideoSurfaceFormat::BottomToTop) {
           painter->scale(1, -1);
           painter->translate(0, -widget->height());
        }

        QImage image(
                currentFrame.bits(),
                currentFrame.width(),
                currentFrame.height(),
                currentFrame.bytesPerLine(),
                imageFormat);

        painter->drawImage(targetRect, image, sourceRect);

        painter->setTransform(oldTransform);

        currentFrame.unmap();
    }
}

VideoFrameGrabber.h:

#ifndef VIDEOFRAMEGRABBER_H
#define VIDEOFRAMEGRABBER_H

#include <QtWidgets>
#include <QAbstractVideoSurface>
#include <QVideoSurfaceFormat>

class VideoFrameGrabber : public QAbstractVideoSurface
{
    Q_OBJECT

public:
    VideoFrameGrabber(QWidget *widget, QObject *parent = 0);

    QList<QVideoFrame::PixelFormat> supportedPixelFormats(
            QAbstractVideoBuffer::HandleType handleType = QAbstractVideoBuffer::NoHandle) const;
    bool isFormatSupported(const QVideoSurfaceFormat &format) const;

    bool start(const QVideoSurfaceFormat &format);
    void stop();

    bool present(const QVideoFrame &frame);

    QRect videoRect() const { return targetRect; }
    void updateVideoRect();

    void paint(QPainter *painter);

private:
    QWidget *widget;
    QImage::Format imageFormat;
    QRect targetRect;
    QSize imageSize;
    QRect sourceRect;
    QVideoFrame currentFrame;

signals:
    void frameAvailable(QImage frame);
};
#endif //VIDEOFRAMEGRABBER_H

Note: in the .h, you will see I added a signal taking an image as a parameter. This will allow you to process your frames anywhere in your code. Here the signal carries a QImage, but you can of course use a QVideoFrame instead if you want to.


Now, we are ready to use this video frame grabber:

QMediaPlayer* player = new QMediaPlayer(this);
// no more QVideoProbe 
VideoFrameGrabber* grabber = new VideoFrameGrabber(this);
player->setVideoOutput(grabber);

connect(grabber, SIGNAL(frameAvailable(QImage)), this, SLOT(processFrame(QImage)));

Now you just have to declare a slot named processFrame(QImage image) and you will receive a QImage each time the present method of your VideoFrameGrabber is called.
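For instance, a minimal slot could save every received frame to disk (a sketch: `MyWidget` and the `frameCount_` member are assumed names for illustration, not part of the code above):

```cpp
// Hypothetical slot: saves each probed frame as a numbered PNG.
// frameCount_ is an assumed int member used to generate unique file names.
void MyWidget::processFrame(QImage image)
{
    // QImage::save() picks the format from the file extension
    image.save(QStringLiteral("frame_%1.png").arg(frameCount_++));
}
```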

I hope that this will help you!

IAmInPLS
  • a stupid question: before, I used a QWidget to show the video with `player->setVideoOutput(ui->videoWidget);` so now I must write the following? `player = new QMediaPlayer(this); probe = new VideoFrameGrabber(ui->videoWidget, this); player->setVideoOutput(probe);` – user3627553 Jun 09 '16 at 13:38
  • Just this : `player = new QMediaPlayer(this); probe = new VideoFrameGrabber(this); player->setVideoOutput(probe);` – IAmInPLS Jun 09 '16 at 13:48
  • Hello @IAmInPLS , the frames are accessible but the video which was rendered in videoWidget is not visible now, videoWidget is completely black. How can we resolve the issue ? – Vikrant Aug 01 '19 at 10:21
  • Thanks for this! I hit a bug recently where the frames would be corrupted on a slower machine. I had to use `emit frameAvailable(image.copy());` to fix this. The QImage was sent between threads. – ForeverLearning Sep 26 '19 at 21:00
  • Your idea is fine but how I am supposed to play the video in a QVideoWidget if the mediaPlayer output is set to videosurface ? – taimoor1990 Feb 09 '20 at 05:38
  • 1
    in function bool VideoFrameGrabber::present(const QVideoFrame &frame), could be problem with construct QImage, and we can use constructor with more parameters const QImage image(cloneFrame.bits(), cloneFrame.width(), cloneFrame.height(), cloneFrame.bytesPerLine(), QVideoFrame::imageFormatFromPixelFormat(cloneFrame.pixelFormat())); – Andrii Rallo May 25 '20 at 20:19
  • @taimoor1990 Since Qt 5.15, `QMediaPlayer::setVideoOutput` has an overload that can take a list of surfaces, so you can pass both `videoWidget->videoSurface()` for rendering + any other surfaces for processing. There's an example of exactly that in my alternative implementation [here](https://stackoverflow.com/a/68088990/616460). – Jason C Jun 22 '21 at 18:53
  • @IAmInPLS ^ Qt 5.15 finally added a multiple-surface `setVideoOutput` to `QMediaPlayer`, so it's possible now to eliminate all the widget forwarding and repainting logic from custom surfaces now and just add the `QVideoWidget`s surface to the player's output list directly instead (see my answer for details). Also I haven't tried Qt6 yet but I've heard the multimedia framework has been significantly reworked, which I hope is a good thing. It's been kind of a mess so far. – Jason C Jun 22 '21 at 19:01
1

From the Qt QVideoProbe documentation:

bool QVideoProbe::setSource(QMediaObject *mediaObject)

Starts monitoring the given mediaObject.

If there is no media object associated with mediaObject, or if it is zero, this probe will be deactivated and this function will return true.

If the media object instance does not support monitoring video, this function will return false.

Any previously monitored objects will no longer be monitored. Passing in the same object will be ignored, but monitoring will continue.

So it seems your "media object instance does not support monitoring video".

DYangu
  • Take a look at my code - I don't use a mediaRecorder, I use a QMediaPlayer. Meanwhile, playback of the video works without problems – user3627553 Jun 09 '16 at 12:50
1

TL;DR: https://gist.github.com/JC3/a7bab65acbd7659d1e57103d2b0021ba (only file)


I had a similar issue (5.15.2; although in my case I was on Windows, definitely using the DirectShow back-end; the probe attachment was returning true, the sample grabber was in the graph, but the callback wasn't firing).

I never figured it out but needed to get something working so I kludged one out of a QAbstractVideoSurface, and it's been working well so far. It's a bit simpler than some of the other implementations in this post, and it's all in one file.

Note that Qt 5.15 or higher is required if you intend to both process frames and play them back with this, since the multi-surface QMediaPlayer::setVideoOutput wasn't added until 5.15. If all you want to do is process video you can still use the code below as a template for pre-5.15, just gut the formatSource_ parts.

Code:

VideoProbeSurface.h (the only file; link is to Gist)

#ifndef VIDEOPROBESURFACE_H
#define VIDEOPROBESURFACE_H

#include <QAbstractVideoSurface>
#include <QVideoSurfaceFormat>

class VideoProbeSurface : public QAbstractVideoSurface {
    Q_OBJECT
public:
    VideoProbeSurface (QObject *parent = nullptr)
        : QAbstractVideoSurface(parent)
        , formatSource_(nullptr)
    {
    }
    void setFormatSource (QAbstractVideoSurface *source) {
        formatSource_ = source;
    }
    QList<QVideoFrame::PixelFormat> supportedPixelFormats (QAbstractVideoBuffer::HandleType type) const override {
        return formatSource_ ? formatSource_->supportedPixelFormats(type)
                             : QList<QVideoFrame::PixelFormat>();
    }
    QVideoSurfaceFormat nearestFormat (const QVideoSurfaceFormat &format) const override {
        return formatSource_ ? formatSource_->nearestFormat(format)
                             : QAbstractVideoSurface::nearestFormat(format);
    }
    bool present (const QVideoFrame &frame) override {
        emit videoFrameProbed(frame);
        return true;
    }
signals:
    void videoFrameProbed (const QVideoFrame &frame);
private:
    QAbstractVideoSurface *formatSource_;
};

#endif // VIDEOPROBESURFACE_H

I went for the quickest-to-write implementation possible, so it just forwards supported pixel formats from another surface (my intent was to both probe and play back to a QVideoWidget), and you get whatever format you get. I just needed to grab subimages into QImages, which handles most common formats. But you could modify this to force any formats you want (e.g. you might want to just return formats supported by QImage, or filter out source formats not supported by QImage), etc.
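As a sketch of that last idea (my own variation, not something the Gist does), the format list could be filtered down to formats QImage can represent:

```cpp
// Sketch: forward only the source formats that have a QImage equivalent,
// so present() always receives frames convertible to QImage.
QList<QVideoFrame::PixelFormat> supportedPixelFormats (QAbstractVideoBuffer::HandleType type) const override {
    QList<QVideoFrame::PixelFormat> filtered;
    const QList<QVideoFrame::PixelFormat> source =
        formatSource_ ? formatSource_->supportedPixelFormats(type)
                      : QList<QVideoFrame::PixelFormat>();
    for (QVideoFrame::PixelFormat fmt : source) {
        // imageFormatFromPixelFormat returns Format_Invalid when
        // there is no direct QImage equivalent
        if (QVideoFrame::imageFormatFromPixelFormat(fmt) != QImage::Format_Invalid)
            filtered << fmt;
    }
    return filtered;
}
```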

Example set up:

 QMediaPlayer *player = ...;
 QVideoWidget *widget = ...;

 // forward surface formats provided by the video widget:
 VideoProbeSurface *probe = new VideoProbeSurface(...);
 probe->setFormatSource(widget->videoSurface());

 // same signal signature as QVideoProbe's signal:
 connect(probe, &VideoProbeSurface::videoFrameProbed, ...);

 // the key move is to render to both the widget (for viewing)
 // and probe (for processing). fortunately, QMediaPlayer can
 // take a list:
 player->setVideoOutput({ widget->videoSurface(), probe });

Notes

The only really sketchy thing I had to do was const_cast the QVideoFrame on the receiver side (for read-only access), since QVideoFrame::map() isn't const:

    if (const_cast<QVideoFrame&>(frame).map(QAbstractVideoBuffer::ReadOnly)) {
        ...;
        const_cast<QVideoFrame&>(frame).unmap();
    }

But the real QVideoProbe would make you do the same thing so, I don't know what's up with that -- it's a strange API. I ran some tests with sw, native hw, and copy-back hw renderers and decoders and map/unmap in read mode seem to be functioning OK, so, whatever.

Performance-wise, the video will bog down if you spend too much time in the callback, so design accordingly. However, I didn't test QueuedConnection, so I don't know if that'd still have the issue (although the fact that the signal parameter is a reference would make me wary of trying it, as well as conceivable issues with the GPU releasing the memory before the slot ends up being called). I don't know how QVideoProbe behaves in this regard, either. I do know that, at least on my machine, I can pack and queue Full HD (1920 x 1080) resolution QImages to a thread pool for processing without slowing down the video.
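That pack-and-queue approach could look roughly like this (a sketch; `imageFromFrame` is a hypothetical helper that does the map/copy/unmap dance, and the connection target is up to you):

```cpp
// Sketch: deep-copy each probed frame into a QImage, then process it on
// a pool thread so the rendering path is never blocked by heavy work.
// imageFromFrame is a hypothetical helper (map -> QImage -> copy() -> unmap).
connect(probe, &VideoProbeSurface::videoFrameProbed, this,
        [](const QVideoFrame &frame) {
            const QImage img = imageFromFrame(frame); // already detached
            QtConcurrent::run([img]() {
                // per-frame processing goes here; img is a deep copy,
                // safe to use after unmap() and on any thread
            });
        });
```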

You probably also want to implement some sort of auto-unmapper utility object for exception-safe unmap(), etc. But again, that's not unique to this; it's the same thing you'd have to do with QVideoProbe.
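Such a guard could be a small generic RAII sketch (the names here are my own, not a Qt API); anything exposing `bool map(Mode)` / `void unmap()`, like QVideoFrame, works:

```cpp
// Generic RAII map guard sketch: maps on construction, always unmaps on
// scope exit (including when an exception is thrown mid-processing).
// Frame must expose bool map(Mode) and void unmap(), e.g. QVideoFrame
// with Mode = QAbstractVideoBuffer::MapMode.
template <typename Frame, typename Mode>
class ScopedMap {
public:
    ScopedMap(Frame &frame, Mode mode)
        : frame_(frame), mapped_(frame.map(mode)) {}
    ~ScopedMap() { if (mapped_) frame_.unmap(); }
    ScopedMap(const ScopedMap &) = delete;
    ScopedMap &operator=(const ScopedMap &) = delete;
    bool isMapped() const { return mapped_; }
private:
    Frame &frame_;
    bool mapped_;
};
```

In the probe slot you would still need the const_cast, but after constructing `ScopedMap<QVideoFrame, QAbstractVideoBuffer::MapMode> guard(mutableFrame, QAbstractVideoBuffer::ReadOnly);` the unmap happens automatically at end of scope.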

So hopefully that helps somebody else.

Example QImage Use

PS, example of packing arbitrarily-formatted QVideoFrames into a QImage:

void MyVideoProcessor::onFrameProbed(const QVideoFrame &frame) {

    if (const_cast<QVideoFrame&>(frame).map(QAbstractVideoBuffer::ReadOnly)) {
        auto imageFormat = QVideoFrame::imageFormatFromPixelFormat(frame.pixelFormat());
        
        QImage image(frame.bits(), frame.width(), frame.height(), frame.bytesPerLine(), imageFormat);

        // *if* you want to use this elsewhere you must force detach:
        image = image.copy();
        // but if you don't need to use it past unmap(), you can just
        // use the original image instead of a copy.

        // <---- now do whatever with the image, e.g. save() it.

        // if you *haven't* copied the image, then, before unmapping,
        // kill any internal data pointers just to be safe:
        image = QImage();

        const_cast<QVideoFrame&>(frame).unmap();
    }

}

Notes about that:

  • Constructing a QImage directly from the data is fast and essentially free: no copies are done.
  • The data buffers are only technically valid between map and unmap, so if you intend to use the QImage outside of that scope, you'll want to use copy() (or anything else that forces a detach) to force a deep copy.
  • You also probably want to ensure that the original not-copied QImage is destructed before calling unmap. It's unlikely to cause problems but it's always a good idea to minimize how many invalid pointers are hanging around at any given time, and also the QImage docs say "The buffer must remain valid throughout the life of the QImage and all copies that have not been modified or otherwise detached from the original buffer". Best to be strict about it.
Jason C