I'm an undergraduate student building a human-segmentation (HumanSeg) iPhone app with CoreML. Since my model needs the original video frames resized and padded with black, I can't rely on Vision (which provides resizing but no black padding) and have to do the conversion myself.
I have CVPixelBuffer frames, and I have converted them into cv::Mat using the following code:
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
int bufferWidth = (int) CVPixelBufferGetWidth(pixelBuffer);
int bufferHeight = (int) CVPixelBufferGetHeight(pixelBuffer);
int bytePerRow = (int) CVPixelBufferGetBytesPerRow(pixelBuffer);
unsigned char *pixel = (unsigned char *) CVPixelBufferGetBaseAddress(pixelBuffer);
// Wrap the BGRA buffer without copying, then clone so the Mat owns its
// pixels and stays valid after the buffer is unlocked below
Mat image = Mat(bufferHeight, bufferWidth, CV_8UC4, pixel, bytePerRow).clone();
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
/*I'll do my resizing and padding here*/
// How can I implement this function?
convertToCVPixelBuffer(image);
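For the resizing and padding step itself, this is roughly what I plan to do: shrink the frame so it fits inside the model's input square, then letterbox the short side with black via cv::copyMakeBorder. The 512x512 target below is just a placeholder for my model's actual input size:

const int targetSize = 512; // placeholder for the model's real input size
float scaleX = (float) targetSize / image.cols;
float scaleY = (float) targetSize / image.rows;
float scale = scaleX < scaleY ? scaleX : scaleY;

// Scale the frame so its longer side matches the target size
cv::Mat resized;
cv::resize(image, resized,
           cv::Size((int)(image.cols * scale), (int)(image.rows * scale)));

// Letterbox: pad the short side with opaque black to reach targetSize x targetSize
int padX = targetSize - resized.cols;
int padY = targetSize - resized.rows;
cv::Mat padded;
cv::copyMakeBorder(resized, padded,
                   padY / 2, padY - padY / 2,  // top, bottom
                   padX / 2, padX - padX / 2,  // left, right
                   cv::BORDER_CONSTANT, cv::Scalar(0, 0, 0, 255));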
Now that the preprocessing is done, I have to convert the cv::Mat back to a CVPixelBuffer to feed it to the CoreML model. How can I achieve this? (Or can Vision achieve black padding with some special technique?)
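For reference, here is a rough sketch of what I imagine convertToCVPixelBuffer could look like (convertToCVPixelBuffer is just my own helper name, not an existing API). It assumes the Mat is still CV_8UC4 (BGRA) and copies row by row, since the pixel buffer's stride may differ from the Mat's, but I'm not sure this is the right approach:

#include <CoreVideo/CoreVideo.h>
#include <opencv2/core.hpp>
#include <cstring>

// Sketch: copy a CV_8UC4 (BGRA) cv::Mat into a newly created CVPixelBuffer.
// The caller would own the returned buffer and release it with CVPixelBufferRelease.
CVPixelBufferRef convertToCVPixelBuffer(const cv::Mat &mat) {
    CVPixelBufferRef buffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          mat.cols, mat.rows,
                                          kCVPixelFormatType_32BGRA,
                                          NULL, // pixel buffer attributes
                                          &buffer);
    if (status != kCVReturnSuccess) {
        return NULL;
    }
    CVPixelBufferLockBaseAddress(buffer, 0);
    unsigned char *dest = (unsigned char *) CVPixelBufferGetBaseAddress(buffer);
    size_t destBytesPerRow = CVPixelBufferGetBytesPerRow(buffer);
    // Copy row by row: the buffer's bytes-per-row may be padded wider than the Mat's
    for (int row = 0; row < mat.rows; row++) {
        std::memcpy(dest + row * destBytesPerRow, mat.ptr(row), mat.cols * 4);
    }
    CVPixelBufferUnlockBaseAddress(buffer, 0);
    return buffer;
}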
Any help would be appreciated.