I'm sending images over the network (from Python) and want to create OpenCV Mats from them on the receiving end (in C++).
They are created like this:
image = self.camera.capture_image() # np.array of dtype np.uint8
h, w, c = image.shape # 4 channels
image = np.transpose(image, (2, 0, 1)) # transpose because channels come first in OpenCV (?)
image = np.ascontiguousarray(image, dtype='>B') # big-endian bytes
bytess = image.tobytes(order='C')
After this, I should have a buffer in which the three dimensions are flattened so that, for each channel, the individual rows are appended one after another, and the per-channel planes are then concatenated to form the final byte buffer. I have verified that my understanding is correct and that the following holds for all valid indices:
bytess[channel*height*width + i*width + j] == image[channel, i, j]
[I think the above part is actually unimportant, because if it's incorrect, I will get an incorrectly displayed image, but at least I would have an image, which is one step further than I am now.]
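For reference, here is a self-contained sketch of that layout check, using a small dummy array in place of the real camera frame (the dimensions and values are arbitrary placeholders, not actual camera output):

```python
import numpy as np

# Dummy stand-in for self.camera.capture_image(): shape (height, width, channels)
h, w, c = 3, 5, 4
image = np.arange(h * w * c, dtype=np.uint8).reshape(h, w, c)

planar = np.ascontiguousarray(np.transpose(image, (2, 0, 1)))  # shape (c, h, w)
bytess = planar.tobytes(order='C')

# Channel planes are concatenated; within each plane, rows are row-major
for channel in range(c):
    for i in range(h):
        for j in range(w):
            assert bytess[channel * h * w + i * w + j] == planar[channel, i, j]
```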
Now on the other side I am trying to do this:
char* pixel_data = … // retrieve array of bytes from message
// assume height, width and channels are known
const int sizes[3] = {channels, height, width}; // dimension order matches the (c, h, w) layout above
const size_t steps[2] = {(size_t)height * (size_t)width, (size_t)width}; // bytes per channel plane, bytes per row
cv::Mat image(3, sizes, CV_8UC1, pixel_data, steps);
So, I create a matrix with three dimensions where the element type is byte. I am not so sure I'm determining the steps correctly, but I think it matches the documentation.
But running this just crashes with
error: (-5:Bad argument) Unknown array type in function 'cvarrToMat'
What is the correct way to serialise an RGBA (or BGRA for OpenCV) image to a byte buffer and create a cv::Mat from it with the C++ API?