I am currently porting code from an old OpenCV example to OpenCV 3 in Python (using PyObjC and the Quartz module). The Objective-C code takes a UIImage and creates a cv::Mat that OpenCV can work with; my Python code takes a CGImage and should do the same thing.
Here is the Objective-C code:
- (cv::Mat)cvMatFromUIImage:(UIImage *)image {
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels (color channels + alpha)

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,     // Pointer to data
                                                    cols,           // Width of bitmap
                                                    rows,           // Height of bitmap
                                                    8,              // Bits per component
                                                    cvMat.step[0],  // Bytes per row
                                                    colorSpace,     // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}
Here is my Python equivalent:
import numpy as np
from Quartz import (CGBitmapContextCreate, CGContextDrawImage,
                    CGImageGetColorSpace, CGImageGetHeight, CGImageGetWidth,
                    CGRectMake, kCGBitmapByteOrderDefault,
                    kCGImageAlphaNoneSkipLast)

def macToOpenCV(image):
    color_space = CGImageGetColorSpace(image)
    cols = CGImageGetWidth(image)   # columns = image width
    rows = CGImageGetHeight(image)  # rows    = image height
    # Rows first, to match cv::Mat(rows, cols, CV_8UC4)
    mat = np.zeros((rows, cols, 4), dtype=np.uint8)
    c_ref = CGBitmapContextCreate(mat,             # pointer to data
                                  cols,            # width of bitmap
                                  rows,            # height of bitmap
                                  8,               # bits per component
                                  mat.strides[0],  # bytes per row; my guess at cvMat.step[0]
                                  color_space,     # colorspace
                                  kCGImageAlphaNoneSkipLast |
                                  kCGBitmapByteOrderDefault)
    CGContextDrawImage(c_ref, CGRectMake(0, 0, cols, rows), image)
    return mat
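For completeness, this is how I am exercising the function; test.png is just a stand-in for my actual test image:

import cv2
from Quartz import (CGDataProviderCreateWithFilename,
                    CGImageCreateWithPNGDataProvider,
                    kCGRenderingIntentDefault)

# Load a PNG from disk as a CGImage and run it through the converter.
provider = CGDataProviderCreateWithFilename(b"test.png")
cg_image = CGImageCreateWithPNGDataProvider(provider, None, False,
                                            kCGRenderingIntentDefault)
mat = macToOpenCV(cg_image)
cv2.imshow("result", mat)
cv2.waitKey(0)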
I am fairly confident that I have most of this right, but I am not sure what the numpy equivalent of cvMat.step[0] is; mat.strides[0] above is my best guess. I would also welcome a general code review of this segment, because when I display the result with cv2.imshow() I am not getting the image I expect at all :).
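For reference, this is why I guessed strides[0]: for an array like the one I allocate, numpy reports the following (640x480 is just an example size), and strides[0] = 640 * 4 looks like a bytes-per-row value:

>>> import numpy as np
>>> mat = np.zeros((480, 640, 4), dtype=np.uint8)
>>> mat.strides  # (bytes per row, bytes per pixel, bytes per channel)
(2560, 4, 1)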
Thanks!