I am struggling with writing a video player that uses OpenCV to read frames from a video and display them in a QWidget.
This is my code:
// video capture is opened here
...
void VideoPlayer::run()
{
    int sleep = 1000 / static_cast<unsigned long>(video_capture_.get(CV_CAP_PROP_FPS));
    forever
    {
        QScopedPointer<cv::Mat> frame(new cv::Mat);
        if (!video_capture_.read(*frame))
            break;
        cv::resize(*frame, *frame, cv::Size(640, 360), 0, 0, cv::INTER_CUBIC);
        cv::cvtColor(*frame, *frame, CV_BGR2RGB);
        QImage image(frame->data, frame->cols, frame->rows, QImage::Format_RGB888);
        emit signalFrame(image); // notifying the QWidget to draw the image
        msleep(sleep); // wait before we read another frame
    }
}
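The thread's signal is connected to the widget with an ordinary signal/slot connection, roughly like this (the class and slot names below are just illustrative, my real ones differ slightly):

// Inside my main window's setup (names here are illustrative):
VideoPlayer *player = new VideoPlayer(this);
VideoWidget *view = new VideoWidget(this);

// The signal crosses threads, so Qt queues it onto the GUI thread automatically.
connect(player, &VideoPlayer::signalFrame,
        view, &VideoWidget::setFrame);

player->start();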
On the QWidget side I simply draw this image in paintEvent().
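A trimmed-down sketch of that widget (the slot name setFrame and the member image_ are just what I call them here):

#include <QWidget>
#include <QPainter>
#include <QImage>

class VideoWidget : public QWidget
{
    Q_OBJECT
public:
    explicit VideoWidget(QWidget *parent = nullptr) : QWidget(parent) {}

public slots:
    void setFrame(const QImage &image)
    {
        image_ = image; // keep the most recent frame
        update();       // ask Qt to schedule a repaint
    }

protected:
    void paintEvent(QPaintEvent *) override
    {
        QPainter painter(this);
        if (!image_.isNull())
            painter.drawImage(rect(), image_); // scaled to the widget size
    }

private:
    QImage image_;
};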
It seems to me that the sleep parameter doesn't play an important role here: no matter how much I decrease it (to get a higher frame rate), the video is still not smooth.
At this point I have almost given up on this approach because it just doesn't work, but I wanted to ask here one more time, just to be sure: am I doing something wrong here?