
I want to include an existing OpenCV application in a GUI created with Qt. I've found some similar questions on Stack Overflow:

QT How to embed an application into QT widget

Run another executable in my Qt app

The problem is that I don't want to simply launch the OpenCV application, as I could with QProcess. The OpenCV application has a "MouseListener", so if I click on the window, it should still call the OpenCV app's function. Furthermore, I would like to display the detected coordinates in labels of the Qt GUI. Therefore there has to be some kind of interaction between the two.

I've read about the QWidget::createWindowContainer function (http://blog.qt.io/blog/2013/02/19/introducing-qwidgetcreatewindowcontainer/), but since I am not very familiar with Qt, I'm not sure whether it is the right choice or how to use it.

I am using Linux Mint 17.2, OpenCV 3.1.0 and Qt 4.8.6.

Thank you for your inputs.

  • Where's the problem in using your cv code in the new project? – Micka May 27 '16 at 15:23
  • Then I have to adapt it to the Qt interface. For example, when I react to a mouse click on the image, I have to implement QMouseEvents and so on. If I simply display my old OpenCV application inside a window, the mouse click would still be handled inside my original app. – dst May 28 '16 at 12:26
  • Not sure whether this function still exists, but in the past you could call cvGetWindowHandle to get a Win API window handle. Maybe you can embed that one in Qt. – Micka May 28 '16 at 12:43
  • Ah, Linux... not sure how cvGetWindowHandle worked there. – Micka May 28 '16 at 12:44

2 Answers


I haven't actually solved the problem the way I intended at the beginning, but now it's working. If someone has the same problem, maybe my solution can provide some ideas. If you want to display a video in Qt, or if you have problems with the OpenCV libraries, maybe I can help.

Following are a few code snippets. They are not commented in much detail, but I hope the concept is clear:

First I have a MainWindow with a label that I promoted to the type of my CustomLabel. The CustomLabel is my container for displaying the video and reacting to my mouse inputs.
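For orientation, here is a minimal sketch of what the CustomLabel declaration could look like, reconstructed from the identifiers used in the snippets below; the exact types of members that never appear explicitly (the point vectors, for instance) are assumptions:

// Sketch of the CustomLabel declaration, inferred from the snippets below.
// Member types and method signatures not shown in the post are assumptions.
#include <QLabel>
#include <QImage>
#include <QTimer>
#include <QPainter>
#include <QPaintEvent>
#include <QMouseEvent>
#include <vector>
#include <opencv2/opencv.hpp>

enum State { STATE_NO_STREAM, STATE_IDLE, STATE_DRAWING, STATE_TRACKING, STATE_LOST_POLE };

class CustomLabel : public QLabel
{
    Q_OBJECT
public:
    explicit CustomLabel(QWidget* parent = 0);

protected:
    void paintEvent(QPaintEvent* e);           // paints the current frame
    void mousePressEvent(QMouseEvent* ev);     // replaces OpenCV's mouse callback
    void mouseMoveEvent(QMouseEvent* ev);

private slots:
    void onTick();                             // grabs and processes one frame
    void onOpenClick();
    void onWebcamBtnOpen();
    void onCloseVideoStream();

private:
    void drawVideoFrame(QPainter& painter);
    void AcquireNewPoints();
    cv::Mat CalculateCenter(const cv::Mat& frame, std::vector<cv::Point2f>& pts);
    void DrawPoints(cv::Mat& img, const std::vector<cv::Point2f>& pts);

    QImage* currentImage;
    QTimer* myTimer;
    cv::VideoCapture* cap;
    int tickrate_ms, vid_fps, video_width, video_height, NOF_corners;
    State currentState;
    bool showPoints, initGrayFrame;
    double xScale, yScale;
    cv::Point calculatedCenter, oldCenter;
    cv::Point2f focusPt;
    QPoint currentMousePos;
    cv::Mat currentFrame, currentCopy, previousGrayFrame;
    std::vector<cv::Point2f> previousPts, currentPts;
    std::vector<uchar> featuresFound;
    std::vector<float> err;
    cv::TermCriteria termcrit;
};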

CustomLabel::CustomLabel(QWidget* parent) : QLabel(parent), currentImage(NULL),
    tickrate_ms(33), vid_fps(0), video_width(0), video_height(0), myTimer(NULL), cap(NULL)
{
    // init variables
    showPoints = true;
    calculatedCenter = cv::Point(0,0);
    oldCenter = cv::Point(0,0);
    currentState = STATE_NO_STREAM;
    NOF_corners = 30; // default init value
    termcrit = cv::TermCriteria(cv::TermCriteria::COUNT | cv::TermCriteria::EPS, 30, 0.01);
    // enable mouse tracking
    this->setMouseTracking(true);
    // connect signals with slots
    QObject::connect(getMainWindow(), SIGNAL(sendFileOpen()), this, SLOT(onOpenClick()));
    QObject::connect(getMainWindow(), SIGNAL(sendWebcamOpen()), this, SLOT(onWebcamBtnOpen()));
    QObject::connect(getMainWindow(), SIGNAL(closeVideoStreamSignal()), this, SLOT(onCloseVideoStream()));
}
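The snippets don't show where cap, myTimer and currentImage are created. A hedged sketch of how the onOpenClick() slot might set them up; the file dialog and the details of this slot are my assumptions, not the original code (QFileDialog and QTimer includes assumed):

void CustomLabel::onOpenClick()
{
    // Hypothetical setup slot; the original post does not show this part.
    QString fileName = QFileDialog::getOpenFileName(this, tr("Open Video"));
    if (fileName.isEmpty())
        return;

    cap = new cv::VideoCapture(fileName.toStdString());
    if (!cap->isOpened())
        return;

    video_width  = (int)cap->get(cv::CAP_PROP_FRAME_WIDTH);
    video_height = (int)cap->get(cv::CAP_PROP_FRAME_HEIGHT);
    vid_fps      = (int)cap->get(cv::CAP_PROP_FPS);
    tickrate_ms  = vid_fps > 0 ? 1000 / vid_fps : 33;

    // Buffer that onTick() fills and paintEvent() draws
    currentImage = new QImage(video_width, video_height, QImage::Format_RGB888);

    myTimer = new QTimer(this);
    connect(myTimer, SIGNAL(timeout()), this, SLOT(onTick()));
    myTimer->start(tickrate_ms);

    currentState = STATE_IDLE;
}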

You have to override the paintEvent() method:

void CustomLabel::paintEvent(QPaintEvent *e){
    QPainter painter(this);

    // When no image is loaded, paint the window black
    if (!currentImage){
        painter.fillRect(QRectF(QPoint(0, 0), QSize(width(), height())), Qt::black);
        QWidget::paintEvent(e);
        return;
    }

    // Draw a frame from the video
    drawVideoFrame(painter);

    QWidget::paintEvent(e);
}

The method that is called from paintEvent():

void CustomLabel::drawVideoFrame(QPainter &painter){
    painter.drawImage(QRectF(QPoint(0, 0), QSize(width(), height())), *currentImage,
                      QRectF(QPoint(0, 0), currentImage->size()));
}

And on every tick of my timer, I call onTick():

void CustomLabel::onTick() {
    /* This method is called every couple of milliseconds.
     * It reads a frame from OpenCV's capture interface and saves it as a QImage.
     * The state machine is implemented here; every tick is handled.
     */
    if(cap->isOpened()){
        switch(currentState) {
        case STATE_IDLE:
            if (!cap->read(currentFrame)){
                qDebug() << "cvWindow::_tick !!! Failed to read frame from the capture interface in STATE_IDLE";
            }
            break;
        case STATE_DRAWING:
            if (!cap->read(currentFrame)){
                qDebug() << "cvWindow::_tick !!! Failed to read frame from the capture interface in STATE_DRAWING";
            }
            currentFrame.copyTo(currentCopy);
            // draw the selection circle around focusPt, scaled from label to video coordinates
            cv::circle(currentCopy, cv::Point(focusPt.x*xScale, focusPt.y*yScale),
                       sqrt((focusPt.x - currentMousePos.x())*(focusPt.x - currentMousePos.x())*xScale*xScale
                            + (focusPt.y - currentMousePos.y())*(focusPt.y - currentMousePos.y())*yScale*yScale),
                       cv::Scalar(0, 0, 255), 2, 8, 0);
            //qDebug() << "focus pt x " << focusPt.x << "y " << focusPt.y;
            break;
        case STATE_TRACKING:
            if (!cap->read(currentFrame)){
                qDebug() << "cvWindow::_tick !!! Failed to read frame from the capture interface in STATE_TRACKING";
            }
            cv::cvtColor(currentFrame, currentFrame, cv::COLOR_BGR2GRAY, 0);
            if(initGrayFrame){
                currentFrame.copyTo(previousGrayFrame); // keep the first gray frame as reference
                initGrayFrame = false;
                return;
            }
            cv::calcOpticalFlowPyrLK(previousGrayFrame, currentFrame, previousPts, currentPts,
                                     featuresFound, err, cv::Size(21, 21), 3, termcrit, 0, 1e-4);
            AcquireNewPoints();
            currentCopy = CalculateCenter(currentFrame, currentPts);
            if(showPoints){
                DrawPoints(currentCopy, currentPts);
            }
            break;
        case STATE_LOST_POLE:
            currentState = STATE_IDLE;
            initGrayFrame = true;
            cv::cvtColor(currentFrame, currentFrame, cv::COLOR_GRAY2BGR);
            break;
        default:
            break;
        }
        // if not tracking, draw currentFrame
        // OpenCV uses BGR order, convert it to RGB
        if(currentState == STATE_IDLE) {
            cv::cvtColor(currentFrame, currentFrame, cv::COLOR_BGR2RGB);
            memcpy(currentImage->scanLine(0), (unsigned char*)currentFrame.data,
                   currentImage->width() * currentImage->height() * currentFrame.channels());
        } else {
            cv::cvtColor(currentCopy, currentCopy, cv::COLOR_BGR2RGB);
            memcpy(currentImage->scanLine(0), (unsigned char*)currentCopy.data,
                   currentImage->width() * currentImage->height() * currentCopy.channels());
            previousGrayFrame = currentFrame;
            previousPts = currentPts;
        }
    }
    // Trigger paint event to redraw the window
    update();
}
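One caveat about the memcpy calls above: QImage pads every scanline to a 4-byte boundary, while a continuous cv::Mat packs its rows, so the single bulk copy only works when width * channels is a multiple of 4. A row-wise copy is safe for any width; in this sketch, 'frame' stands for whichever Mat (currentFrame or currentCopy) is being displayed:

// Row-wise copy that respects QImage's 4-byte scanline alignment
for (int row = 0; row < frame.rows; ++row) {
    memcpy(currentImage->scanLine(row), frame.ptr(row),
           frame.cols * frame.channels());
}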

Don't mind the yScale and xScale factors; they are just for the OpenCV drawing functions, because the CustomLabel's size is not the same as the video resolution.
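The mouse handling itself, which replaces OpenCV's setMouseCallback, is not in the snippets above. A sketch of how it could look, assuming focusPt is kept in label coordinates and only scaled when touching the cv::Mat (as the STATE_DRAWING code above does):

void CustomLabel::mousePressEvent(QMouseEvent* ev)
{
    if (currentState == STATE_NO_STREAM)
        return;
    // Map label size to video resolution; used by the drawing code above
    xScale = (double)video_width  / width();
    yScale = (double)video_height / height();
    focusPt = cv::Point2f(ev->x(), ev->y());   // label coordinates
    currentState = STATE_DRAWING;
}

void CustomLabel::mouseMoveEvent(QMouseEvent* ev)
{
    // Used while STATE_DRAWING to size the selection circle
    currentMousePos = ev->pos();
}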

– dst

OpenCV is used just for image processing. If you know how to convert cv::Mat to any other required format, you can integrate OpenCV with any GUI development kit. For Qt, you can convert cv::Mat to QImage and then use it anywhere in the Qt SDK. This example shows OpenCV and Qt integration, including threading and webcam access: https://github.com/nickdademo/qt-opencv-multithreaded The webcam is accessed using OpenCV, and the cv::Mat received is converted to QImage and rendered onto a QLabel. The code contains a MatToQImage() function which shows the conversion from cv::Mat to QImage. The integration is pretty simple, as everything is in C++.
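The MatToQImage() function in that repository handles several mat types; below is a simplified sketch of the same idea for the common 8-bit cases. It is my own condensed version, not the repository's exact code:

#include <QImage>
#include <QColor>
#include <QVector>
#include <opencv2/imgproc/imgproc.hpp>

// Minimal cv::Mat -> QImage conversion for 8-bit BGR and grayscale mats.
QImage matToQImage(const cv::Mat& mat)
{
    if (mat.type() == CV_8UC3) {
        cv::Mat rgb;
        cv::cvtColor(mat, rgb, cv::COLOR_BGR2RGB);   // OpenCV stores BGR
        // Deep copy, because 'rgb' goes out of scope
        return QImage(rgb.data, rgb.cols, rgb.rows,
                      (int)rgb.step, QImage::Format_RGB888).copy();
    }
    if (mat.type() == CV_8UC1) {
        // Qt 4 has no Format_Grayscale8, so use an indexed image with a gray palette
        QImage img(mat.data, mat.cols, mat.rows, (int)mat.step, QImage::Format_Indexed8);
        QVector<QRgb> palette;
        for (int i = 0; i < 256; ++i)
            palette.append(qRgb(i, i, i));
        img.setColorTable(palette);
        return img.copy();
    }
    return QImage();   // unsupported type
}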

– Gaurav Raj
  • Thank you for your answer. I've already solved my problem, but without threading. You can see parts of my solution above ;) – dst Jun 14 '16 at 15:36