
I am currently working with JavaCV, which makes use of the public void onPreviewFrame(byte[] data, Camera camera) callback of the Camera API.

Since the Camera API is deprecated, I have been looking into camera2 and MediaProjection. Both of these APIs make use of the ImageReader class.

Currently I instantiate such an ImageReader with the following code:

ImageReader.newInstance(DISPLAY_WIDTH, DISPLAY_HEIGHT, PixelFormat.RGBA_8888, 2);

And attach an OnImageAvailableListener like this:

private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
        = new ImageReader.OnImageAvailableListener() {

    @Override
    public void onImageAvailable(ImageReader reader) {
        mBackgroundHandler.post(new processImage(reader.acquireNextImage()));
    }
};

I have tried using the RGBA_8888 format with Javacv as per this thread: https://github.com/bytedeco/javacv/issues/298 but that doesn't work for me.

So instead I was thinking about using RenderScript to convert these Images to NV21 (YUV_420_SP) format, which is the default output of the old Camera API in the onPreviewFrame function, as that format worked for me with the Camera library.

I have also read posts such as this one and this website on doing the conversion, but these didn't work for me and I fear they would be too slow. Furthermore, my knowledge of C is severely limited. Basically, it looks like I want the reverse of https://developer.android.com/reference/android/renderscript/ScriptIntrinsicYuvToRGB.html

So how can I go from an Image to a byte array that matches the output of the onPreviewFrame function, i.e. NV21 (YUV_420_SP) format? Preferably using RenderScript, as it should be faster.
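To make the target concrete, the NV21 layout that onPreviewFrame delivers can be described with a little index arithmetic: a full-resolution Y plane followed by one interleaved VU plane sampled 2x2. The helper class below is my own sketch of that layout, not part of any Android API:

```java
// Sketch of the NV21 (YUV420SP) byte layout: a full-resolution Y plane
// followed by one interleaved VU plane at quarter resolution.
public class Nv21Layout {
    // Total buffer size: width*height luma bytes + width*height/2 chroma bytes.
    static int bufferSize(int width, int height) {
        return width * height * 3 / 2;
    }

    // Index of the Y byte for pixel (x, y).
    static int yIndex(int x, int y, int width) {
        return y * width + x;
    }

    // Index of the V byte for the 2x2 block containing (x, y);
    // the matching U byte immediately follows it.
    static int vIndex(int x, int y, int width, int height) {
        return width * height + (y / 2) * width + (x / 2) * 2;
    }

    public static void main(String[] args) {
        // A 4x4 frame: 16 Y bytes, then 4 interleaved VU pairs.
        System.out.println(bufferSize(4, 4));   // 24
        System.out.println(yIndex(3, 1, 4));    // 7
        System.out.println(vIndex(2, 2, 4, 4)); // 22
    }
}
```

Any conversion routine, RenderScript or not, ultimately has to write its output at exactly these offsets.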


Edit 1:

I have tried using ImageFormat.YUV_420_888, but to no avail: I kept getting errors like The producer output buffer format 0x1 doesn't match the ImageReader's configured buffer format. I switched back to PixelFormat.RGBA_8888 and discovered that there is only one plane in the Image object. The byte buffer of this plane has size width * height * 4 (one byte each for R, G, B and A). So I tried to convert this to NV21 format.

I modified code from this answer to produce the following function:

void RGBtoNV21(byte[] yuv420sp, byte[] argb, int width, int height) {
    final int frameSize = width * height;

    int yIndex = 0;
    int uvIndex = frameSize;

    int A, R, G, B, Y, U, V;
    int index = 0;
    int rgbIndex = 0;

    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {

            R = argb[rgbIndex++];
            G = argb[rgbIndex++];
            B = argb[rgbIndex++];
            A = argb[rgbIndex++];

            // RGB to YUV conversion according to
            // https://en.wikipedia.org/wiki/YUV#Y.E2.80.B2UV444_to_RGB888_conversion
            Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
            U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
            V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;

            // NV21 has a plane of Y and interleaved planes of VU each sampled by a factor
            // of 2 meaning for every 4 Y pixels there are 1 V and 1 U.
            // Note the sampling is every other pixel AND every other scanline.
            yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
            if (i % 2 == 0 && index % 2 == 0) {
                yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
            }
            index++;
        }
    }
}

and invoke it using:

int mWidth = mImage.getWidth();
int mHeight = mImage.getHeight();

byte[] rgbaBytes = new byte[mWidth * mHeight * 4];
mImage.getPlanes()[0].getBuffer().get(rgbaBytes);
mImage.close();

byte[] yuv = new byte[mWidth * mHeight * 3 / 2];
RGBtoNV21(yuv, rgbaBytes, mWidth, mHeight);

Here mImage is an Image object produced by my ImageReader.

Yet this produces a result similar to this image:

malformed image

which is clearly malformed. It seems my conversion is off, but I cannot figure out what exactly.
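Two things worth checking in such a conversion (my own observations, not confirmed in the thread): Java bytes are signed, so a byte 0xFF read from the RGBA buffer yields -1 instead of 255 unless it is masked with & 0xFF, which throws the YUV math far off; and the RGBA plane's getRowStride() can exceed width * 4, which would skew the image. A minimal sketch of the masked per-pixel conversion, using the same BT.601 fixed-point coefficients as the function above:

```java
// Sketch: RGB -> YUV for one pixel, with the signed bytes masked to 0..255.
public class Rgb2Yuv {
    static int clamp(int v) {
        return v < 0 ? 0 : (v > 255 ? 255 : v);
    }

    // Returns {Y, U, V} for one pixel given its raw (possibly negative) bytes.
    static int[] pixelToYuv(byte r, byte g, byte b) {
        int R = r & 0xFF; // mask: byte 0xFF must become 255, not -1
        int G = g & 0xFF;
        int B = b & 0xFF;
        int Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
        int U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
        int V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
        return new int[] { clamp(Y), clamp(U), clamp(V) };
    }

    public static void main(String[] args) {
        // A white pixel (all 0xFF bytes) should map to Y=235, U=V=128
        // in video range; without the mask it would come out as Y=15.
        int[] yuv = pixelToYuv((byte) 0xFF, (byte) 0xFF, (byte) 0xFF);
        System.out.println(yuv[0] + " " + yuv[1] + " " + yuv[2]); // 235 128 128
    }
}
```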

Gooey
  • To better understand the problem, are your trying to process some RGBA images from ImageReader and then convert it back to YUV? – Miao Wang Aug 25 '16 at 22:47
  • @MiaoWang Yes. I try to convert the Images from the ImageReader back into the format you would obtain by using old camera API. I hope to do this with renderscript as this will be probably heavy for the CPU (converting 30 images per second). – Gooey Aug 26 '16 at 18:10
  • while your question is legitimate, I don't think this is the best approach to connect camera2 interface to javacv. I think that the preferred way is to request ImageFormat.YUV_420_888 and make minimal adaptations to your processing code to adopt this format, which is only by ordering different from NV21. If you really need, you can have a simple loop to rearrange bytes between ImageFormat.YUV_420_888 and NV21. – Alex Cohn Oct 03 '16 at 07:01
  • @AlexCohn I agree, however the new MediaProjection library requires you to make use of ImageReaders, returning Image objects. I will not use camera2 (yet) and stick to the camera package for the camera related activities. Additionally, I would like to reuse my javacv code which handles data in NV21(YUV_420_SP) format. Yet, I believe that YUV_420_888 did not work on my (and more) device. However, if you have more information on converting from Imageformat YUV_420_888 to NV21 or some other approach, I am all ears. – Gooey Oct 03 '16 at 09:22
  • Which device is that? Maybe it has only limited camera2 support? – Alex Cohn Oct 03 '16 at 10:10
  • @AlexCohn An alcatel onetouch. But regardless, I am willing to try using ImageFormat.YUV_420_888 and convert that to NV21 format. This is image encoding right? Is the pixel format still the same as using the (old) camera library? If I understand correctly, NV21 is the image format and YUV_420_SP the color encoding? – Gooey Oct 04 '16 at 08:24
  • I strongly recommend not to use camera2 on devices that only return LEGACY or LIMITED to [INFO_SUPPORTED_HARDWARE_LEVEL](https://developer.android.com/reference/android/hardware/camera2/CameraCharacteristics.html#INFO_SUPPORTED_HARDWARE_LEVEL) query. There will be no advantage using the new API, but only the buggy implementation of compatibility layer by the OEM. Especially if you already have code that works with old Camera api in hand. – Alex Cohn Oct 05 '16 at 10:51
  • @AlexCohn well this app is not for my phone only. And above all, I am using the old camera api, yet for the mediaprojecting I still need to use an imagereader object. So the problem remains there. – Gooey Oct 05 '16 at 11:17
  • There is no such name YUV_420_SP. YCrCb_420_SP is a deprecated synonym of NV21. – Alex Cohn Oct 05 '16 at 11:36
  • @AlexCohn I got that name from here; https://en.wikipedia.org/wiki/YUV#Y.E2.80.B2UV420sp_.28NV21.29_to_RGB_conversion_.28Android.29 Regardless, i will look into converting YUV_420_888 to NV21 – Gooey Oct 05 '16 at 14:29
  • no underscores in Wiki. Also note that Y'UV is not the same as YCbCr in their notation. The difference is rooted deep in some ancient TV protocols. – Alex Cohn Oct 05 '16 at 14:44
  • @AlexCohn Thanks for your explanation and answer, it really helped me to better understand these image formats and their notation. Unfortunately YUV_420_888 does not work on my emulators or real device. I also found [this answer](http://stackoverflow.com/a/35728962/1360853) to one of my old question I completely forgot. He suggests to stick to RGBA_8888 format and convert that to NV21. I have attempted to do this, yet did not manage to make it fully functional. I have edited my answer to show my attempt, could you please take a look at it? – Gooey Oct 07 '16 at 11:32
  • fadden wrote there about screen capture, which is a very different beast from camera capture. Note that internally your camera produces NV21, so by forcing yourself to use camera2 API, you cause two not very cheap translations per frame. As an exercise, conversion of RGBA_8888 to NV21 is legitimate. As part of a deployed application - not. – Alex Cohn Oct 08 '16 at 06:43
  • [here](https://medium.com/@qhutch/android-simple-and-fast-image-processing-with-renderscript-2fa8316273e1) you can find a renderscript method that coverts RGB to YUV, as well as the examples that show how to call such method from your Java code. But still, for camera it's only relevant as an exercise. – Alex Cohn Oct 08 '16 at 07:20
  • Ah, perhaps I should clarify: I intend to use the same code for screen recording and camera2. Both make use of ImageReaders which produce Image objects. Currently my code works for the old camera (NV21) output. That's why I want to convert the RGBA Image objects to NV21 format. I want to support devices not capable of running camera2, in other words stick to the old camera API for now. – Gooey Oct 08 '16 at 16:22

1 Answer

@TargetApi(19)
public static byte[] yuvImageToByteArray(Image image) {

    assert(image.getFormat() == ImageFormat.YUV_420_888);

    int width = image.getWidth();
    int height = image.getHeight();

    Image.Plane[] planes = image.getPlanes();
    byte[] result = new byte[width * height * 3 / 2];

    int stride = planes[0].getRowStride();
    assert (1 == planes[0].getPixelStride());
    if (stride == width) {
        planes[0].getBuffer().get(result, 0, width*height);
    }
    else {
        for (int row = 0; row < height; row++) {
            planes[0].getBuffer().position(row*stride);
            planes[0].getBuffer().get(result, row*width, width);
        }
    }

    stride = planes[1].getRowStride();
    assert (stride == planes[2].getRowStride());
    int pixelStride = planes[1].getPixelStride();
    assert (pixelStride == planes[2].getPixelStride());
    byte[] rowBytesCb = new byte[stride];
    byte[] rowBytesCr = new byte[stride];

    for (int row = 0; row < height/2; row++) {
        // Each interleaved VU row of the NV21 output is `width` bytes wide.
        int rowOffset = width*height + width * row;
        planes[1].getBuffer().position(row*stride);
        // The last chroma row may be shorter than the row stride, so clamp the read.
        planes[1].getBuffer().get(rowBytesCb, 0, Math.min(stride, planes[1].getBuffer().remaining()));
        planes[2].getBuffer().position(row*stride);
        planes[2].getBuffer().get(rowBytesCr, 0, Math.min(stride, planes[2].getBuffer().remaining()));

        for (int col = 0; col < width/2; col++) {
            result[rowOffset + col*2] = rowBytesCr[col*pixelStride];
            result[rowOffset + col*2 + 1] = rowBytesCb[col*pixelStride];
        }
    }
    return result;
}

I have published another function with similar requirements. That new implementation tries to take advantage of the fact that quite often, YUV_420_888 is only NV21 in disguise.
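To illustrate what "NV21 in disguise" means, and why the interleaved per-sample copy above reconstructs the VU plane correctly in that case, here is a plain-Java model (my own illustration, using java.nio rather than android.media.Image) of the U and V planes as two overlapping views of a single interleaved chroma buffer, both with pixel stride 2:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class Nv21InDisguise {
    // Model of the generic per-sample copy: treat `chroma` as the backing
    // store of two overlapping plane views (V at offset 0, U at offset 1,
    // pixel stride 2) and rebuild the interleaved VU row from them.
    static byte[] rebuildInterleaved(byte[] chroma) {
        ByteBuffer backing = ByteBuffer.wrap(chroma);
        ByteBuffer vPlane = backing.duplicate(); // V samples at 0, 2, 4, ...
        ByteBuffer uPlane = backing.duplicate();
        uPlane.position(1);                      // U samples at 1, 3, 5, ...
        int pixelStride = 2;

        byte[] vRow = new byte[vPlane.remaining()];
        byte[] uRow = new byte[uPlane.remaining()];
        vPlane.get(vRow);
        uPlane.get(uRow);

        byte[] out = new byte[chroma.length];
        for (int col = 0; col < chroma.length / 2; col++) {
            out[col * 2] = vRow[col * pixelStride];     // V sample
            out[col * 2 + 1] = uRow[col * pixelStride]; // U sample
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] chroma = { 10, 20, 11, 21, 12, 22, 13, 23 }; // V0 U0 V1 U1 ...
        // When the planes really overlap like this, the copy is an identity,
        // which is why the NV21 case can be short-circuited with one bulk get.
        System.out.println(Arrays.equals(rebuildInterleaved(chroma), chroma)); // true
    }
}
```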

Alex Cohn
  • I can't use either of the __bulk get methods__. Trying gives the error `java.lang.Exception: Failed resolving method get on class java.nio.DirectByteBuffer`. This happens with the buffer returned by `getBuffer` for an `Image` captured from an `ImageReader` via `camera2`. – Mudlabs May 01 '19 at 03:50
  • @Mudlabs: sorry, there was a mistake in this code. Fixed now. Thanks for reporting! – Alex Cohn May 01 '19 at 06:29