To start with the question: what is the most efficient way to initialize and use an `ImageReader` with the camera2 API, knowing that I am always going to convert the capture into a `Bitmap`?
I'm playing around with the Android camera2 samples, and everything is working quite nicely. However, for my purposes I always need to perform some post-processing on captured still images, for which I require a `Bitmap` object. Presently I am using `BitmapFactory.decodeByteArray(...)` on the bytes coming from `ImageReader.acquireNextImage().getPlanes()[0].getBuffer()` (I'm paraphrasing). While this works acceptably, I still feel there should be a way to improve performance: the captures are encoded as `ImageFormat.JPEG` and have to be decoded again to get the `Bitmap`, which seems redundant.
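For reference, here's roughly what that looks like today (simplified; this runs in the reader's image-available callback, and `postProcess` and `backgroundHandler` stand in for my own code):

```java
reader.setOnImageAvailableListener(r -> {
    // A JPEG capture arrives as a single plane of compressed bytes.
    try (Image image = r.acquireNextImage()) {
        ByteBuffer buffer = image.getPlanes()[0].getBuffer();
        byte[] bytes = new byte[buffer.remaining()];
        buffer.get(bytes);
        // The step that feels redundant: decoding the compressed JPEG again.
        Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
        postProcess(bitmap); // stand-in for my actual post-processing
    }
}, backgroundHandler);
```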
Ideally I'd obtain the capture in `PixelFormat.RGB_888` and just copy that buffer into a `Bitmap` using `Bitmap.copyPixelsFromBuffer(...)`, but it doesn't seem like initializing an `ImageReader` with that format has reliable device support.
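To make concrete what I'm hoping for, the copy would look something like the sketch below. Note this assumes a reader created with `PixelFormat.RGBA_8888` rather than `RGB_888` (there's no `Bitmap.Config` that matches a packed 3-byte format), and that the device actually honors it, which is exactly the part that seems unreliable:

```java
// Hypothetical sketch only: assumes a reader created with
// PixelFormat.RGBA_8888 on a device that actually supports it.
try (Image image = reader.acquireNextImage()) {
    Image.Plane plane = image.getPlanes()[0];
    int pixelStride = plane.getPixelStride();            // 4 bytes per pixel
    int rowPixels = plane.getRowStride() / pixelStride;  // may exceed width
    Bitmap bitmap = Bitmap.createBitmap(rowPixels, image.getHeight(),
            Bitmap.Config.ARGB_8888);
    // Straight memory copy -- no decode step at all.
    bitmap.copyPixelsFromBuffer(plane.getBuffer());
    if (rowPixels != image.getWidth()) {
        // Crop away any row padding the reader added.
        bitmap = Bitmap.createBitmap(bitmap, 0, 0,
                image.getWidth(), image.getHeight());
    }
}
```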
`YUV_420_888` could be another option, but looking around SO it seems to require jumping through some hoops to decode into a `Bitmap`.
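The workaround I keep seeing suggested is roughly the dance below: repack the planes as NV21 and round-trip through `YuvImage`. As far as I can tell it assumes interleaved chroma planes with a pixel stride of 2 and no row padding, which isn't guaranteed on every device, and it still ends up doing a JPEG encode/decode anyway:

```java
// Commonly suggested YUV_420_888 -> Bitmap dance (assumes interleaved
// chroma with pixel stride 2 and no row padding; not guaranteed everywhere).
try (Image image = reader.acquireNextImage()) {
    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
    ByteBuffer vuBuffer = image.getPlanes()[2].getBuffer(); // effectively VU
    int ySize = yBuffer.remaining();
    int vuSize = vuBuffer.remaining();
    byte[] nv21 = new byte[ySize + vuSize];
    yBuffer.get(nv21, 0, ySize);
    vuBuffer.get(nv21, ySize, vuSize);

    YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21,
            image.getWidth(), image.getHeight(), null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuv.compressToJpeg(
            new Rect(0, 0, image.getWidth(), image.getHeight()), 100, out);
    byte[] jpeg = out.toByteArray();
    // Still a JPEG round trip in the end.
    Bitmap bitmap = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
}
```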
Is there a recommended way to do this?