
I do not have a background in imaging or graphics, so please bear with me :)

I am using JavaCV in one of my projects. In the examples, a Frame is constructed which has a buffer of a certain size.

When using Android's public void onPreviewFrame(byte[] data, Camera camera) callback, copying this data byte array is no problem if you declare the Frame as new Frame(frameWidth, frameHeight, Frame.DEPTH_UBYTE, 2);, where frameWidth and frameHeight are declared as

Camera.Size previewSize = cameraParam.getPreviewSize();
int frameWidth = previewSize.width;
int frameHeight = previewSize.height;
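
For reference, a minimal sketch of that working path (the class and field names are mine, just for illustration). It relies on NV21 preview data being width * height * 3/2 bytes, which fits in the Frame's width * height * 2 byte buffer:

import java.nio.ByteBuffer;

import android.hardware.Camera;

import org.bytedeco.javacv.Frame;

class PreviewGrabber implements Camera.PreviewCallback {

    private final Frame frame;

    public PreviewGrabber(int frameWidth, int frameHeight) {
        // NV21 needs width * height * 3/2 bytes; two channels of DEPTH_UBYTE
        // give width * height * 2 bytes, so the preview data always fits.
        frame = new Frame(frameWidth, frameHeight, Frame.DEPTH_UBYTE, 2);
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // Rewind the Frame's buffer and copy the NV21 preview bytes into it.
        ((ByteBuffer) frame.image[0].position(0)).put(data);
    }
}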

Recently, Android added a method to capture your screen. Naturally, I wanted to grab those images and also convert them to Frames. I modified the example code from Google to use the ImageReader.

This ImageReader is constructed as ImageReader.newInstance(DISPLAY_WIDTH, DISPLAY_HEIGHT, PixelFormat.RGBA_8888, 2);. So currently it uses the RGBA_8888 pixel format. I use the following code to copy the bytes to the Frame, which is instantiated as new Frame(DISPLAY_WIDTH, DISPLAY_HEIGHT, Frame.DEPTH_UBYTE, 2);:

ByteBuffer buffer = mImage.getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes);
mImage.close();
((ByteBuffer) frame.image[0].position(0)).put(bytes);

But this gives me a java.nio.BufferOverflowException. I printed the sizes of both buffers: the Frame's buffer size is 691200, whereas the bytes array above has size 1413056. I failed to figure out how this latter number is constructed because I ran into this native call. So clearly, this won't work out.
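
A diagnostic sketch for relating the buffer size to the image dimensions: the plane's buffer holds roughly rowStride * height bytes, and the row stride can exceed width * pixelStride because of alignment padding (the tag and method name are mine):

void logPlaneLayout(android.media.Image image) {
    android.media.Image.Plane plane = image.getPlanes()[0];
    int pixelStride = plane.getPixelStride(); // 4 bytes per RGBA_8888 pixel.
    int rowStride = plane.getRowStride();     // Can exceed width * pixelStride.
    android.util.Log.d("PlaneLayout", "pixelStride=" + pixelStride
            + " rowStride=" + rowStride
            + " bufferSize=" + plane.getBuffer().remaining());
}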

After quite a bit of digging I found out that the NV21 image format is "the default format for Camera preview images, when not otherwise set with setPreviewFormat(int)", but the ImageReader class does not support the NV21 format (see the format parameter). So that's tough luck. The documentation also states that "For the android.hardware.camera2 API, the YUV_420_888 format is recommended for YUV output instead."

So I tried creating an ImageReader like this: ImageReader.newInstance(DISPLAY_WIDTH, DISPLAY_HEIGHT, ImageFormat.YUV_420_888, 2);, but this gives me java.lang.UnsupportedOperationException: The producer output buffer format 0x1 doesn't match the ImageReader's configured buffer format 0x23. So that won't work either.

As a last resort, I tried to convert RGBA_8888 to YUV myself using e.g. this post, but I fail to understand how I can obtain an int[] rgba as per the answer.
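
For what it's worth, a sketch of one way such an int[] rgba could be built from the RGBA_8888 plane (the helper name is mine; it packs each pixel into the 0xAARRGGBB int layout that Bitmap and Color use, honoring the plane's pixel and row strides):

int[] packRgba(android.media.Image image) {
    android.media.Image.Plane plane = image.getPlanes()[0];
    java.nio.ByteBuffer buffer = plane.getBuffer();
    int width = image.getWidth();
    int height = image.getHeight();
    int pixelStride = plane.getPixelStride();
    int rowStride = plane.getRowStride();

    int[] rgba = new int[width * height];
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            int offset = i * rowStride + j * pixelStride;
            int r = buffer.get(offset) & 0xff;
            int g = buffer.get(offset + 1) & 0xff;
            int b = buffer.get(offset + 2) & 0xff;
            int a = buffer.get(offset + 3) & 0xff;
            // Same packing as Bitmap/Color: 0xAARRGGBB.
            rgba[i * width + j] = (a << 24) | (r << 16) | (g << 8) | b;
        }
    }
    return rgba;
}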

So, TL;DR: how can I obtain NV21 image data, like Android provides in its public void onPreviewFrame(byte[] data, Camera camera) camera callback, using Android's ImageReader (and MediaProjection), so that I can instantiate my Frame and work with it?

Edit (25-10-2016)

I have created the following conversion runnable to go from RGBA to NV21 format:

private class UpdateImage implements Runnable {

    private final Image mImage;

    public UpdateImage(Image image) {
        mImage = image;
    }

    @Override
    public void run() {

        int mWidth = mImage.getWidth();
        int mHeight = mImage.getHeight();

        // Four bytes per pixel: width * height * 4.
        byte[] rgbaBytes = new byte[mWidth * mHeight * 4];
        // put the data into the rgbaBytes array.
        mImage.getPlanes()[0].getBuffer().get(rgbaBytes);

        mImage.close(); // Access to the image is no longer needed, release it.

        // Create a YUV byte array: width * height * 3/2 (NV21 uses 12 bits per pixel).
        byte[] yuv = new byte[mWidth * mHeight * 3 / 2];
        RGBtoNV21(yuv, rgbaBytes, mWidth, mHeight);
        ((ByteBuffer) yuvImage.image[0].position(0)).put(yuv);
    }

    void RGBtoNV21(byte[] yuv420sp, byte[] argb, int width, int height) {
        final int frameSize = width * height;

        int yIndex = 0;
        int uvIndex = frameSize;

        int A, R, G, B, Y, U, V;
        int index = 0;
        int rgbIndex = 0;

        for (int i = 0; i < height; i++) {
            for (int j = 0; j < width; j++) {

                R = argb[rgbIndex++];
                G = argb[rgbIndex++];
                B = argb[rgbIndex++];
                A = argb[rgbIndex++]; // Ignored right now.

                // RGB to YUV conversion according to
                // https://en.wikipedia.org/wiki/YUV#Y.E2.80.B2UV444_to_RGB888_conversion
                Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
                U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
                V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;

                // NV21 stores a full-resolution Y plane followed by an
                // interleaved VU plane subsampled by 2 in both dimensions:
                // every 2x2 block of Y pixels shares one V and one U sample.
                yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
                if (i % 2 == 0 && index % 2 == 0) {
                    yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                    yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
                }
                index++;
            }
        }
    }
}

The yuvImage object is initialized as yuvImage = new Frame(DISPLAY_WIDTH, DISPLAY_HEIGHT, Frame.DEPTH_UBYTE, 2);, where DISPLAY_WIDTH and DISPLAY_HEIGHT are just two integers specifying the display size. This is the code where a background handler handles the onImageAvailable callback:

private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
            = new ImageReader.OnImageAvailableListener() {

        @Override
        public void onImageAvailable(ImageReader reader) {
            mBackgroundHandler.post(new UpdateImage(reader.acquireNextImage()));
        }

    };

...

mImageReader = ImageReader.newInstance(DISPLAY_WIDTH, DISPLAY_HEIGHT, PixelFormat.RGBA_8888, 2);
mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);

The methods run and at least I don't get any errors, but the output image is malformed. What is going wrong in my conversion? An example of the image being produced:

example of malformed image

Edit (15-11-2016)

I have modified the RGBtoNV21 function to be the following:

void RGBtoNV21(byte[] yuv420sp, int width, int height) {
    try {
        final int frameSize = width * height;

        int yIndex = 0;
        int uvIndex = frameSize;
        int pixelStride = mImage.getPlanes()[0].getPixelStride();
        int rowStride = mImage.getPlanes()[0].getRowStride();
        int rowPadding = rowStride - pixelStride * width;
        ByteBuffer buffer = mImage.getPlanes()[0].getBuffer();

        Bitmap bitmap = Bitmap.createBitmap(getResources().getDisplayMetrics(), width, height, Bitmap.Config.ARGB_8888);

        int A, R, G, B, Y, U, V;
        int offset = 0;

        for (int i = 0; i < height; i++) {
            for (int j = 0; j < width; j++) {

                // Useful link: https://stackoverflow.com/questions/26673127/android-imagereader-acquirelatestimage-returns-invalid-jpg

                R = (buffer.get(offset) & 0xff) << 16;     // R
                G = (buffer.get(offset + 1) & 0xff) << 8;  // G
                B = (buffer.get(offset + 2) & 0xff);       // B
                A = (buffer.get(offset + 3) & 0xff) << 24; // A
                offset += pixelStride;

                // Reassemble the ARGB pixel for the debug bitmap.
                int pixel = A | R | G | B;
                bitmap.setPixel(j, i, pixel);

                // RGB to YUV conversion according to
                // https://en.wikipedia.org/wiki/YUV#Y.E2.80.B2UV444_to_RGB888_conversion
//                        Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
//                        U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
//                        V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;

                Y = (int) Math.round(R *  .299000 + G *  .587000 + B *  .114000);
                U = (int) Math.round(R * -.168736 + G * -.331264 + B *  .500000 + 128);
                V = (int) Math.round(R *  .500000 + G * -.418688 + B * -.081312 + 128);

                // NV21 stores a full-resolution Y plane followed by an
                // interleaved VU plane subsampled by 2 in both dimensions:
                // every 2x2 block of Y pixels shares one V and one U sample.
                yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
                if (i % 2 == 0 && j % 2 == 0) {
                    yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                    yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
                }
            }
            offset += rowPadding;
        }

        File file = new File(Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES).getAbsolutePath(), "/Awesomebitmap.png");
        FileOutputStream fos = new FileOutputStream(file);
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, fos);
        fos.close();
    } catch (Exception e) {
        Timber.e(e, "Converting image to NV21 went wrong.");
    }
}

Now the image is no longer malformed, but the chroma is off.

Wrong color chroma

The right side is the bitmap being created in that loop; the left side is the NV21 data saved to an image. So the RGB pixels are processed correctly. Clearly the chroma is off, but the RGB to YUV conversion should be the same as the one depicted on Wikipedia. What could be wrong here?

  • Are you using Camera 1 API? – Volodymyr Kulyk Nov 15 '16 at 15:06
  • No, I am using an ImageReader and would like to get output similar to that of the Camera 1 API (NV21). – Gooey Nov 15 '16 at 15:36
  • I want you to try again to use `ImageFormat.YUV_420_888`, but with supported camera preview size (don't use Display size). [Get list of supported sizes](http://stackoverflow.com/q/21668394/3226984), and pick one. – Volodymyr Kulyk Nov 15 '16 at 16:05
  • That format crashes on my device since I run Android 5, which does not support that format. (It is also noted in the question) – Gooey Nov 15 '16 at 17:28
  • Based on your comments under the accepted answer it seems that you managed to fix chroma off issue, could you post the solution? I'm losing my hairs over this problematic conversion. – Kierek93 Jan 31 '21 at 19:26

1 Answer


Generally speaking, the point of ImageReader is to give you raw access to the pixels sent to the Surface with minimal overhead, so attempting to have it perform color conversions doesn't make sense.

For the Camera you get to pick one of two output formats (NV21 or YV12), so pick YV12. That's your raw YUV data. For screen capture the output will always be RGB, so you need to pick RGBA_8888 (format 0x1) for your ImageReader, rather than YUV_420_888 (format 0x23). If you need YUV for that, you will have to do the conversion yourself. The ImageReader gives you a series of Plane objects, not a byte[], so you will need to adapt to that.
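
For illustration, a minimal sketch of what doing that conversion yourself could look like, assuming full-range BT.601 coefficients and the single RGBA_8888 plane from the screen-capture ImageReader (the method name and the integer approximation are mine, not part of any library API):

static void rgbaPlaneToNv21(android.media.Image image, byte[] nv21) {
    android.media.Image.Plane plane = image.getPlanes()[0];
    java.nio.ByteBuffer buf = plane.getBuffer();
    int width = image.getWidth();
    int height = image.getHeight();
    int pixelStride = plane.getPixelStride();
    int rowStride = plane.getRowStride();

    // nv21 must be width * height * 3/2 bytes: Y plane, then interleaved VU.
    int yIndex = 0;
    int uvIndex = width * height;

    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            // Index via the strides so row padding is skipped; alpha is ignored.
            int offset = i * rowStride + j * pixelStride;
            int r = buf.get(offset) & 0xff;
            int g = buf.get(offset + 1) & 0xff;
            int b = buf.get(offset + 2) & 0xff;

            // Full-range BT.601, integer approximation (coefficients sum to 256).
            nv21[yIndex++] = (byte) ((77 * r + 150 * g + 29 * b) >> 8);

            // Subsample chroma: one V and one U per 2x2 block of pixels.
            if ((i & 1) == 0 && (j & 1) == 0) {
                int u = ((-43 * r - 85 * g + 128 * b) >> 8) + 128;
                int v = ((128 * r - 107 * g - 21 * b) >> 8) + 128;
                nv21[uvIndex++] = (byte) Math.max(0, Math.min(255, v));
                nv21[uvIndex++] = (byte) Math.max(0, Math.min(255, u));
            }
        }
    }
}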

  • Thanks for your answer, do you have any pointer on how to do the conversion myself? – Gooey Mar 04 '16 at 12:54
  • You can find bits of code on the web, notably in answers to questions on stackoverflow. I think it's using BT.601, but I don't remember for sure. (See https://en.wikipedia.org/wiki/YUV#Converting_between_Y.27UV_and_RGB) – fadden Mar 04 '16 at 16:34
  • Hi @fadden Sorry to bother you with a half year old question, but I have continued to work on my project (learned a lot regarding images and formats) and managed to progress a bit. It works now but the images are malformed, since you have a lot of knowledge on this topic, would you mind take a look at the code I provided? I would appreciate your help. – Gooey Oct 25 '16 at 11:42
  • Pixels sliding off like that usually means you've got an incorrect value for width or stride. Check the row and pixel strides in the plane (https://developer.android.com/reference/android/media/Image.Plane.html); my guess would be that the row stride is larger than the width. Do you have a correct copy of the image? It helps to see correct vs. incorrect. – fadden Oct 25 '16 at 16:35
  • I have been messing around with the code for a while now. You were right about the strides, I have adjusted the code to take that into account. However, the chroma is off now. I have looked (and tried) at the two essentially the same conversion algorithms between RGB and YUV but there is something off. I added some code to create a bitmap and set the pixels in this bitmap manually and this shows fine, so my error must be in the conversion. Do you have any tips how to debug this chroma issue? – Gooey Nov 15 '16 at 15:24
  • Either the math is off or the YUV layout is wrong. You seem to have purple, white, red, and others mapped to gold... but if the luma channel were right you should at least see variation in intensity. You've got something for that one line of text, so it's not completely hosed, but my guess is that the color values are coming out excessively large and getting clamped to 255. I don't see anything obviously wrong in the code, so you'll need to single-step through it or log in/out values. Maybe feed a simple black & white image in to reduce the variables. – fadden Nov 16 '16 at 00:50
  • Got it, the RGB values were indeed off. Thanks for all the help! I have learned a lot. – Gooey Nov 16 '16 at 13:40
  • 1
    If I understood your answer correctly we can't really get NV21 directly from camera and instead we have to go through conversions NV21 (from camera device) -> RGBA (camera does that for us before sending pixels to the surface) -> NV21 (convert back to NV21 ourselves)? Or am I missing the point? – Dmitry Zaytsev Apr 26 '17 at 20:42
  • @DmitryZaytsev sure you can get the YUV data from camera session through ImageReader, but not from the screen. – Alex Cohn Aug 25 '20 at 10:38