
To start with the question: what is the most efficient way to initialize and use ImageReader with the camera2 api, knowing that I am always going to convert the capture into a Bitmap?

I'm playing around with the Android camera2 samples, and everything is working quite nicely. However, for my purposes I always need to perform some post-processing on captured still images, for which I require a Bitmap object. Presently I am using BitmapFactory.decodeByteArray(...) on the bytes coming from ImageReader.acquireNextImage().getPlanes()[0].getBuffer() (I'm paraphrasing). While this works acceptably, I still feel like there should be a way to improve performance. The captures are encoded as ImageFormat.JPEG and need to be decoded again to get the Bitmap, which seems redundant. Ideally I'd obtain them in PixelFormat.RGB_888 and just copy that to a Bitmap using Bitmap.copyPixelsFromBuffer(...), but it doesn't seem like initializing an ImageReader with that format has reliable device support. YUV_420_888 could be another option, but looking around SO it seems that it requires jumping through some hoops to decode into a Bitmap. Is there a recommended way to do this?
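Concretely, what I'm doing now looks roughly like this (a paraphrased sketch; `imageReader` and the surrounding wiring are assumed):

    // Paraphrased sketch of the current JPEG path: grab the encoded bytes
    // from the ImageReader and decode them into a Bitmap.
    Image image = imageReader.acquireNextImage();
    try {
        ByteBuffer buffer = image.getPlanes()[0].getBuffer();
        byte[] bytes = new byte[buffer.remaining()];
        buffer.get(bytes);
        Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
        // ... post-processing happens on `bitmap` ...
    } finally {
        image.close();
    }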

Elte Hupkes

3 Answers


The question is what you are optimizing for.

JPEG is without doubt the easiest format, supported by all devices. Decoding it to a bitmap is not as redundant as it seems, because encoding the picture into JPEG is usually performed by dedicated hardware. This means minimal bandwidth is used to transmit the image from the sensor to your application; on some devices this is the only way to get maximum resolution. BitmapFactory.decodeByteArray(...) is often backed by a special hardware decoder, too. The major problem with this call is that it may cause an out-of-memory exception, because the output bitmap is too big. So you will find many examples that do subsampled decoding, tuned for the use case where the bitmap must be displayed on the phone screen.
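For illustration, a typical subsampled decode looks like this (a generic sketch; `jpegBytes`, `targetWidth`, and `targetHeight` are placeholders):

    // Two-pass subsampled decode: first read only the bounds, then decode at
    // a reduced sample size to keep the output bitmap within memory limits.
    BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inJustDecodeBounds = true;   // first pass: dimensions only, no pixels
    BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length, opts);

    int sampleSize = 1;
    while (opts.outWidth / (sampleSize * 2) >= targetWidth
            && opts.outHeight / (sampleSize * 2) >= targetHeight) {
        sampleSize *= 2;              // each step halves width and height
    }

    opts.inJustDecodeBounds = false;
    opts.inSampleSize = sampleSize;   // second pass: the real, subsampled decode
    Bitmap scaled = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length, opts);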

If your device supports the required resolution with RGB_8888, go for it: this needs minimal post-processing. But scaling such an image down may be more CPU-intensive than dealing with JPEG, and memory consumption may be huge. Anyway, only a few devices support this format for camera capture.
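If you want to try it, probe for support first. A sketch, assuming `characteristics` is the CameraCharacteristics of the camera in question and `image` is a captured Image, using PixelFormat.RGBA_8888 as the concrete format; note that copyPixelsFromBuffer expects tightly packed rows (rowStride == width * 4):

    // Check whether this camera actually advertises RGBA_8888 output.
    StreamConfigurationMap map = characteristics.get(
            CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
    boolean rgbaSupported = map != null
            && map.isOutputSupportedFor(PixelFormat.RGBA_8888);

    // If supported, a captured Image can be copied straight into a Bitmap,
    // provided the plane is tightly packed.
    Image.Plane plane = image.getPlanes()[0];
    Bitmap bitmap = Bitmap.createBitmap(
            image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    bitmap.copyPixelsFromBuffer(plane.getBuffer());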

As for YUV_420_888 and other YUV formats, the advantages over JPEG are even smaller than for RGB.

If you need the best quality image and don't have memory limitations, you should go for RAW images which are supported on most high-end devices these days. You will need your own conversion algorithm, and probably make different adaptations for different devices, but at least you will have full command of the picture acquisition.
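As a starting point, the platform's DngCreator can at least persist the raw sensor data in a standard container (a sketch; `characteristics`, `captureResult`, `rawImage`, and `file` are assumed to come from your RAW capture session):

    // Write a RAW_SENSOR capture out as a DNG file; demosaicing the data into
    // a Bitmap is left to your own, likely device-specific, pipeline.
    // (writeImage throws IOException, so wrap or declare accordingly.)
    try (DngCreator dngCreator = new DngCreator(characteristics, captureResult);
         FileOutputStream out = new FileOutputStream(file)) {
        dngCreator.writeImage(out, rawImage);
    }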

Alex Cohn
  • It's good to know that the JPEG encoding is done by hardware - I suspected as much, but I wasn't certain. I actually need to extract a piece from the final image, but I want that in the highest possible quality; so I can't really avoid decoding the entire bitmap. It also needs to work on commonplace devices though. I guess I need to experiment with some devices to find the best solution and see if I don't get OutOfMemory exceptions. – Elte Hupkes Jul 14 '18 at 19:18
  • The built-in BitmapFactory does not support this, but you can run your custom JPEG decoder to only process the cropped area. This involves writing some C code around the libjpeg library. With RAW, you'll have faster code, but a higher chance of OOM. – Alex Cohn Jul 15 '18 at 05:03
  • Here is the link: https://github.com/libjpeg-turbo/libjpeg-turbo/issues/34. See also https://stackoverflow.com/questions/14068124/crop-image-from-byte-array. – Alex Cohn Jul 15 '18 at 10:01
  • One more remark, with your permission: *"to experiment with some devices to find the best solution"* is a suboptimal way. Be prepared to use different solutions for different device classes: ones that have full camera2 support, others that have legacy camera2, and so on. – Alex Cohn Jul 15 '18 at 10:15
  • Haha yes, you should read that as "if I implement something, test it on all devices I can get my hands on to see if it actually works". My experience with Android is that reasoning about the problem only takes you as far as the first device that refuses to run it anyway ;). Good advice though. – Elte Hupkes Jul 16 '18 at 05:58
  • That's why crowd wisdom is important. This is our (developers') only chance to fight the device (and network) diversity. The advice I am giving here is based not only on studying the official docs, and not only on my personal experience with hundreds of different models in different circumstances, but also on closely following the reports from my colleagues around the world. – Alex Cohn Jul 16 '18 at 07:54

After a while I now sort of have an answer to my own question, albeit not a very satisfying one. After much consideration I attempted the following:

  • Set up a ScriptIntrinsicYuvToRGB RenderScript of the desired output size
  • Take the Surface of the input allocation, and set this as the target surface for the still capture
  • Run this RenderScript when a new allocation is available and convert the resulting bytes to a Bitmap (a rough sketch of this setup follows below)
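Roughly, the setup looked like this (a reconstructed sketch rather than my exact code; `width` and `height` are the capture dimensions, and all RenderScript classes come from android.renderscript):

    RenderScript rs = RenderScript.create(context);
    ScriptIntrinsicYuvToRGB yuvToRgb =
            ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

    // Input allocation backed by a Surface that the camera writes YUV frames into.
    Type yuvType = new Type.Builder(rs, Element.YUV(rs))
            .setX(width).setY(height)
            .setYuvFormat(ImageFormat.YUV_420_888)
            .create();
    Allocation input = Allocation.createTyped(rs, yuvType,
            Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);

    Type rgbType = new Type.Builder(rs, Element.RGBA_8888(rs))
            .setX(width).setY(height)
            .create();
    Allocation output = Allocation.createTyped(rs, rgbType);

    // This Surface goes into the still-capture request as its output target.
    Surface captureSurface = input.getSurface();

    // When input.setOnBufferAvailableListener(...) fires:
    input.ioReceive();                 // latch the newest frame
    yuvToRgb.setInput(input);
    yuvToRgb.forEach(output);          // convert YUV -> RGBA
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    output.copyTo(bitmap);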

This actually worked like a charm, and was super fast. Then I started noticing weird behavior from the camera, which happened on other devices as well. As it turns out, the camera HAL doesn't really recognize this as a still capture. This means that (a) the flash / exposure routines don't fire when they need to, and (b) if you have initiated a precapture sequence before your capture, auto-exposure will remain locked unless you manage to unlock it using AE_PRECAPTURE_TRIGGER_CANCEL (API >= 23) or some other lock / unlock magic which I couldn't get to work on either device. Unless you're fine with this only working in optimal lighting conditions where no exposure adjustment is necessary, this approach is entirely useless.

I have one more idea, which is to set up an ImageReader with YUV_420_888 output and incorporate the conversion routine from this answer to get RGB pixels from it (a stride-aware sketch of the plane repacking follows below). However, I'm actually working with Xamarin.Android, and RenderScript user scripts are not supported there. I may be able to hack around that, but it's far from trivial.
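For reference, getting the planes into a single NV21 buffer doesn't itself require RenderScript. A rough, stride-aware sketch (my own illustration, not the linked answer's routine):

    // Repack a YUV_420_888 Image into an NV21 byte array, honoring row and
    // pixel strides. Unoptimized, but safe across plane layouts.
    static byte[] yuv420ToNv21(Image image) {
        int width = image.getWidth();
        int height = image.getHeight();
        byte[] nv21 = new byte[width * height * 3 / 2];
        Image.Plane[] planes = image.getPlanes();

        // Luma plane, copied row by row.
        ByteBuffer yBuf = planes[0].getBuffer();
        int yRowStride = planes[0].getRowStride();
        int pos = 0;
        for (int row = 0; row < height; row++) {
            yBuf.position(row * yRowStride);
            yBuf.get(nv21, pos, width);
            pos += width;
        }

        // Chroma planes, interleaved as VU pairs (NV21 ordering).
        ByteBuffer uBuf = planes[1].getBuffer();
        ByteBuffer vBuf = planes[2].getBuffer();
        int uRowStride = planes[1].getRowStride();
        int uPixStride = planes[1].getPixelStride();
        int vRowStride = planes[2].getRowStride();
        int vPixStride = planes[2].getPixelStride();
        for (int row = 0; row < height / 2; row++) {
            for (int col = 0; col < width / 2; col++) {
                nv21[pos++] = vBuf.get(row * vRowStride + col * vPixStride);
                nv21[pos++] = uBuf.get(row * uRowStride + col * uPixStride);
            }
        }
        return nv21;
    }

From the NV21 array you'd still need a YUV-to-RGB step; YuvImage.compressToJpeg(...) followed by a decode works everywhere, but reintroduces the JPEG round trip I was trying to avoid.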

For my particular use case I have managed to speed up JPEG decoding to acceptable levels by carefully arranging background tasks with subsampled decodes of the versions I need at multiple stages of my processing, so implementing this likely won't be worth my time any time soon. If anyone is looking for ideas on how to approach something similar, though, that's what you could do.

Elte Hupkes

Change the ImageReader instance to use a different ImageFormat, like this:

ImageReader.newInstance(width, height, ImageFormat.JPEG, 1)
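For completeness, a sketch of wiring such a reader up (`width`, `height`, and `backgroundHandler` are placeholders):

    ImageReader reader = ImageReader.newInstance(width, height, ImageFormat.JPEG, 1);
    reader.setOnImageAvailableListener(r -> {
        Image image = r.acquireLatestImage();
        if (image == null) return;
        try {
            // Pull planes[0] and decode with BitmapFactory.decodeByteArray(...),
            // as described in the question.
        } finally {
            image.close();
        }
    }, backgroundHandler);
    // reader.getSurface() must be added as a session output and a request target.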

emandt
  • As mentioned, I thought about that. `PixelFormat.RGB_888` is likely not to work on many devices out there though, so that's not really an option. Slightly different Googling terms yielded a question similar to mine though: https://stackoverflow.com/questions/25776671/reading-rgb-images-with-an-imagereader. – Elte Hupkes Jul 14 '18 at 11:02
  • Then I think you have to execute a small benchmark to detect the best available PixelFormat before creating the ImageReader instance. – emandt Jul 14 '18 at 20:26