I am getting a stream of images from a simulator in the form of a gil::image_view and need to convert them to cv::Mat for further processing. Up until now I have more or less copied the code from this answer without really understanding it:
#include <boost/gil.hpp>
#include <opencv2/core.hpp>
#include <vector>

auto image_view = // data received from an API
using pixel = decltype(image_view)::value_type;
static_assert(sizeof(pixel) == 4, "RGBA");
// contiguous staging buffer, one pixel per element
std::vector<pixel> raw_data(image_view.width() * image_view.height());
// copy the source view into the buffer, row by row
boost::gil::copy_pixels(image_view,
                        boost::gil::interleaved_view(image_view.width(),
                                                     image_view.height(),
                                                     raw_data.data(),
                                                     image_view.width() * sizeof(pixel)));
// cv::Mat wraps the buffer without copying it, so raw_data must outlive mat
auto mat = cv::Mat(image_view.height(), image_view.width(), CV_8UC4, raw_data.data());
I'm certain that the alpha channel is not used, so I can also define another image_view:
// a view that converts each rgba8 pixel to rgb8 on the fly as it is read
auto rgb_view = boost::gil::color_converted_view<boost::gil::rgb8_pixel_t>(image_view);
using pixel = decltype(rgb_view)::value_type;
static_assert(sizeof(pixel) == 3, "RGB");
...
In this case, copying the pixels with boost::gil::copy_pixels(...) would make sense, since there is no way to convert interleaved rgba8 to rgb8 in constant time.
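For reference, a minimal sketch of how I picture that copy, continuing from rgb_view above; the CV_8UC3 type and the buffer layout are my assumptions:

using rgb_pixel = decltype(rgb_view)::value_type;
std::vector<rgb_pixel> rgb_data(rgb_view.width() * rgb_view.height());
// each rgba8 pixel is converted to rgb8 as it is read out of rgb_view
// and written into the contiguous rgb_data buffer
boost::gil::copy_pixels(rgb_view,
                        boost::gil::interleaved_view(rgb_view.width(),
                                                     rgb_view.height(),
                                                     rgb_data.data(),
                                                     rgb_view.width() * sizeof(rgb_pixel)));
// three channels now, so CV_8UC3; cv::Mat still only wraps rgb_data
auto mat_rgb = cv::Mat(rgb_view.height(), rgb_view.width(), CV_8UC3, rgb_data.data());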
Given the nature of the application, I'm pretty sure the image is already in memory somewhere, so I could technically just use a pointer to its first element to create my OpenCV image, at the expense of carrying an extra channel.
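A minimal sketch of that zero-copy idea, assuming the view coming from the API is an interleaved rgba8 view over contiguous memory with no row padding (otherwise boost::gil::interleaved_view_get_raw_data would not apply):

// valid only while the buffer owned by the API stays alive
auto* raw = boost::gil::interleaved_view_get_raw_data(image_view);
// wrap the existing memory; no copy, but the unused alpha channel stays
auto mat = cv::Mat(image_view.height(), image_view.width(), CV_8UC4, raw);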