
How can I copy cv::Mat data back into the sampleBuffer?

My scenario is as follows: I create a cv::Mat from the pixel buffer for landmark detection and draw the landmarks onto the cv::Mat image data. I'd like to copy this cv::Mat back into the sample buffer so the frame is displayed with the landmarks.

Is this possible?

I achieved this with dlib, but I need to know how to do it with cv::Mat:

char *baseBuffer = (char *)CVPixelBufferGetBaseAddress(imageBuffer);
img.reset();
long position = 0;
while (img.move_next()) {
    dlib::bgr_pixel& pixel = img.element();
    long bufferLocation = position * 4; // (row * width + column) * 4
    char b = baseBuffer[bufferLocation];
    char g = baseBuffer[bufferLocation + 1];
    char r = baseBuffer[bufferLocation + 2];
    dlib::bgr_pixel newpixel(b, g, r);
    pixel = newpixel;

    position++;
}
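
For reference, the cv::Mat itself comes from the pixel buffer roughly like this (a sketch assuming a kCVPixelFormatType_32BGRA buffer; variable names are illustrative):

CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

// Wrap the BGRA buffer without copying, then convert to a 3-channel BGR Mat.
cv::Mat bgra((int)height, (int)width, CV_8UC4, baseAddress, bytesPerRow);
cv::Mat targetImage;
cv::cvtColor(bgra, targetImage, cv::COLOR_BGRA2BGR); // copies the data

CVPixelBufferUnlockBaseAddress(imageBuffer, 0);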
  • Probably yes. A few more details? Some code? – Miki Aug 22 '16 at 16:00
  • I followed this [answer](http://stackoverflow.com/a/12355675/3649485) to convert `CVImageBufferRef` to `cv::Mat`. Now I'd like to put this cv::Mat back into the sample buffer. I know how to do it with dlib by copying pixels back into the sampleBuffer, but I don't know how to do it with OpenCV. Sample code for dlib is in the next comment. – mosn Aug 22 '16 at 16:22
  • Please post the code properly formatted in the question. It's not readable in a comment. – Miki Aug 22 '16 at 16:24
  • It seems code formatting is not working in comments; I'll edit the main question. – mosn Aug 22 '16 at 16:31

1 Answer


I am answering my own question.

First, you need to access the pixel data of the cv::Mat image. I followed this great solution.

Then you need to copy the pixels into the buffer, starting from the base address. The following code should help you achieve this:

CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
char *baseBuffer = (char *)CVPixelBufferGetBaseAddress(imageBuffer);
long position = 0;
uint8_t* pixelPtr = (uint8_t*)targetImage.data;
int cn = targetImage.channels();        // 3 for a CV_8UC3 (BGR) Mat
cv::Scalar_<uint8_t> bgrPixel;
for(int i = 0; i < targetImage.rows; i++)
{
    for(int j = 0; j < targetImage.cols; j++)
    {
        long bufferLocation = position * 4; // the destination buffer has 4 bytes per pixel

        // Read one BGR pixel from the cv::Mat
        bgrPixel.val[0] = pixelPtr[i*targetImage.cols*cn + j*cn + 0]; // B
        bgrPixel.val[1] = pixelPtr[i*targetImage.cols*cn + j*cn + 1]; // G
        bgrPixel.val[2] = pixelPtr[i*targetImage.cols*cn + j*cn + 2]; // R

        // Write it into the pixel buffer with the channel order swapped.
        // Note: this puts R, G, B into the first three destination bytes;
        // for a kCVPixelFormatType_32BGRA buffer, copy without swapping (see comments below).
        baseBuffer[bufferLocation]     = bgrPixel.val[2];
        baseBuffer[bufferLocation + 1] = bgrPixel.val[1];
        baseBuffer[bufferLocation + 2] = bgrPixel.val[0];
        position++;
    }
}

Some things to take note of:

  • Make sure you call CVPixelBufferLockBaseAddress and CVPixelBufferUnlockBaseAddress before and after the operation (see the sketch after these notes).
  • I am doing this on CV_8UC3; you might want to check your cv::Mat type.
  • I haven't done a performance analysis, but I am getting smooth output with this.
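
A minimal sketch of the lock/unlock wrapping from the first note, also using CVPixelBufferGetBytesPerRow for the destination stride in case the buffer has row padding (it assumes the same 3-channel BGR targetImage as above; adjust the channel order to match your buffer's pixel format):

CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);

uint8_t *baseBuffer = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); // may be larger than width * 4

for (int i = 0; i < targetImage.rows; i++)
{
    uint8_t *dstRow = baseBuffer + i * bytesPerRow;
    const uint8_t *srcRow = targetImage.ptr<uint8_t>(i);
    for (int j = 0; j < targetImage.cols; j++)
    {
        // Copy B, G, R into the first three bytes of each 4-byte destination pixel;
        // swap them here if your pixel format stores the channels differently.
        dstRow[j * 4 + 0] = srcRow[j * 3 + 0];
        dstRow[j * 4 + 1] = srcRow[j * 3 + 1];
        dstRow[j * 4 + 2] = srcRow[j * 3 + 2];
    }
}

CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
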
  • Why `CV_8UC3`? The `CMSampleBuffer` itself has 4 channels. – user924 Mar 06 '18 at 13:10
  • Yeah, this method eats quite a lot of CPU. If you don't have many other functions in your app it will be smooth, but if you do, you'll care about the CPU usage. – user924 Mar 06 '18 at 13:48
  • For me the following worked instead: `baseBuffer[bufferLocation] = pixelPtr[i*targetImage.cols*cn + j*cn + 0]; baseBuffer[bufferLocation + 1] = pixelPtr[i*targetImage.cols*cn + j*cn + 1]; baseBuffer[bufferLocation + 2] = pixelPtr[i*targetImage.cols*cn + j*cn + 2];` I guess that's because I use `videoOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as String) : NSNumber(value: kCVPixelFormatType_32BGRA as UInt32)]` – user924 Mar 06 '18 at 14:13