
I'm converting frames from an ImageReader in YUV format to RGBA format in a native method, through this code in the NDK:

size_t bufferSize = buffer.width * buffer.height * (size_t)4;
uint8_t * outPtr = reinterpret_cast<uint8_t *>(buffer.bits);
for (size_t y = 0; y < srcHeight; y++)
{
    uint8_t * Y_rowPtr = srcYPtr + y * Y_rowStride;
    uint8_t * U_rowPtr = srcUPtr + (y >> 1) * U_rowStride;
    uint8_t * V_rowPtr = srcVPtr + (y >> 1) * V_rowStride;

    for (size_t x = 0; x < srcWidth; x++)
    {
        uint8_t Y = Y_rowPtr[x];
        uint8_t U = U_rowPtr[(x >> 1)];
        uint8_t V = V_rowPtr[(x >> 1)];

        double R = (Y + (V - 128) * 1.40625);
        double G = (Y - (U - 128) * 0.34375 - (V - 128) * 0.71875);
        double B = (Y + (U - 128) * 1.765625);

        *(outPtr + (--bufferSize)) = 255; // alpha for RGBA_8888
        *(outPtr + (--bufferSize)) = (uint8_t) (B > 255 ? 255 : (B < 0 ? 0 : B));
        *(outPtr + (--bufferSize)) = (uint8_t) (G > 255 ? 255 : (G < 0 ? 0 : G));
        *(outPtr + (--bufferSize)) = (uint8_t) (R > 255 ? 255 : (R < 0 ? 0 : R));

    }
}

Why is the image rotated 90 degrees?

UPDATE:

I use the conversion from https://www.fourcc.org/fccyvrgb.php but the image is still rotated 90° from the original.
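For reference, the per-pixel conversion with the power-of-two coefficients used above (1.40625 ≈ 1.403 and so on, close to the constants on that page) can be checked in isolation. This is a sketch with a hypothetical `yuvToRgb` helper, not a claim about the exact fourcc formulas:

```cpp
#include <algorithm>
#include <cstdint>

// Clamp a double to the [0, 255] byte range.
static inline uint8_t clampToByte(double v) {
    return (uint8_t)std::max(0.0, std::min(255.0, v));
}

// YUV -> RGB for one pixel, using the power-of-two approximations
// from the conversion loop above.
static void yuvToRgb(uint8_t Y, uint8_t U, uint8_t V,
                     uint8_t &R, uint8_t &G, uint8_t &B) {
    R = clampToByte(Y + (V - 128) * 1.40625);
    G = clampToByte(Y - (U - 128) * 0.34375 - (V - 128) * 0.71875);
    B = clampToByte(Y + (U - 128) * 1.765625);
}
```

A quick sanity check: a neutral gray input (Y=128, U=V=128) must come out as gray, and saturated inputs must clamp instead of wrapping around.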

UPDATE2:

    @Override
    public void onImageAvailable(ImageReader reader) {
        // get the new frame
        Image image = reader.acquireNextImage();

        if (image == null) {
            return;
        }


        // prepare the RGBA output
        Image.Plane Y_plane = image.getPlanes()[0];
        int Y_rowStride = Y_plane.getRowStride();
        Image.Plane U_plane = image.getPlanes()[1];
        int UV_rowStride = U_plane.getRowStride();  // in YUV images, uPlane.getRowStride() == vPlane.getRowStride()
        Image.Plane V_plane = image.getPlanes()[2];
        JNIUtils.RGBADisplay(image.getWidth(), image.getHeight(), Y_rowStride, Y_plane.getBuffer(), UV_rowStride, U_plane.getBuffer(), UV_rowStride, V_plane.getBuffer(), surface);


        image.close();
    }

UPDATE 3: Code in native.cpp

extern "C" JNIEXPORT void JNICALL
Java_com_ndkvideoimagecapture_JNIUtils_RGBADisplay(
    JNIEnv *env, // env allows passing data by reference
    jobject obj,
    jint srcWidth,
    jint srcHeight,
    jint Y_rowStride,
    jobject Y_Buffer,
    jint U_rowStride,
    jobject U_Buffer,
    jint V_rowStride,
    jobject V_Buffer,
    jobject surface) {

uint8_t *srcYPtr = reinterpret_cast<uint8_t *>(env->GetDirectBufferAddress(Y_Buffer));
uint8_t *srcUPtr = reinterpret_cast<uint8_t *>(env->GetDirectBufferAddress(U_Buffer));
uint8_t *srcVPtr = reinterpret_cast<uint8_t *>(env->GetDirectBufferAddress(V_Buffer));

ANativeWindow *window = ANativeWindow_fromSurface(env, surface);
ANativeWindow_acquire(window);
ANativeWindow_Buffer buffer;

ANativeWindow_setBuffersGeometry(window, srcWidth, srcHeight, WINDOW_FORMAT_RGBA_8888);

if (int32_t err = ANativeWindow_lock(window, &buffer, NULL)) {
    LOGE("ANativeWindow_lock failed with error code: %d\n", err);
    ANativeWindow_release(window);
    return; // don't write to a buffer we failed to lock
}

size_t bufferSize = buffer.width * buffer.height * (size_t)4;
uint8_t * outPtr = reinterpret_cast<uint8_t *>(buffer.bits);
for (size_t y = 0; y < srcHeight; y++)
{
    uint8_t * Y_rowPtr = srcYPtr + y * Y_rowStride;
    uint8_t * U_rowPtr = srcUPtr + (y >> 1) * U_rowStride;
    uint8_t * V_rowPtr = srcVPtr + (y >> 1) * V_rowStride;

    for (size_t x = 0; x < srcWidth; x++)
    {
        uint8_t Y = Y_rowPtr[x];
        uint8_t U = U_rowPtr[(x >> 1)];
        uint8_t V = V_rowPtr[(x >> 1)];

        double R = (Y + (V - 128) * 1.40625);
        double G = (Y - (U - 128) * 0.34375 - (V - 128) * 0.71875);
        double B = (Y + (U - 128) * 1.765625);

        *(outPtr + (--bufferSize)) = 255; // alpha for RGBA_8888
        *(outPtr + (--bufferSize)) = (uint8_t) (B > 255 ? 255 : (B < 0 ? 0 : B));
        *(outPtr + (--bufferSize)) = (uint8_t) (G > 255 ? 255 : (G < 0 ? 0 : G));
        *(outPtr + (--bufferSize)) = (uint8_t) (R > 255 ? 255 : (R < 0 ? 0 : R));

    }
}

ANativeWindow_unlockAndPost(window);
ANativeWindow_release(window);
}
Enda Azi
  • Rotated compared to what? If you use the camera in portrait orientation, you can setup the live preview correctly. But the byte[] returned to onPreviewFrame() is always in the original landscape order. – Alex Cohn Dec 05 '17 at 16:00
  • The image in to conversion change orientation, not the app. – Enda Azi Dec 05 '17 at 16:13
  • Save your original YUV image to disk, and open it as a grayscale image in a picture viewer on your PC. You will see that it is already 'rotated'. – Alex Cohn Dec 05 '17 at 16:51
  • The goal is to see the image in the preview camera without saving photos in to disk. – Enda Azi Dec 06 '17 at 09:34
  • Wait a sec, do you convert to RGB for live preview? *Please understand that I wrote about saving the YUV buffer to disk only for debugging purposes*. – Alex Cohn Dec 06 '17 at 09:49
  • I was converting the frame in live preview from YUV to RGBA. How can I do what you ask? – Enda Azi Dec 06 '17 at 10:13
  • Please explain in more detail where you get the YUV_420_888 **src?Ptr**'s. I don't understand why you fill your **outPtr** backwards, but this will flip the output, not rotate by 90°. If your image *is* rotated 90°, you should see that the actual **width** is the expected **height**, and vice versa. – Alex Cohn Dec 06 '17 at 12:38
  • I inserted the code in an update to the first post – Enda Azi Dec 06 '17 at 14:27

1 Answer

I suggest that you read this related discussion (there are nice screenshots, too, don't miss them).

TL;DR: ImageReader always returns a landscape image, same as onPreviewFrame().
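If you nevertheless want to rotate the landscape frame yourself during conversion, one option is to transpose the coordinates while writing. Below is a minimal sketch with a hypothetical `rotate90cw` helper for a single tightly packed channel; a real implementation would honor the row strides and rotate either each plane or each RGBA pixel the same way:

```cpp
#include <cstdint>

// Rotate a tightly packed single-channel image 90 degrees clockwise.
// dst must hold srcWidth * srcHeight bytes; its dimensions become
// (width = srcHeight, height = srcWidth).
static void rotate90cw(const uint8_t *src, uint8_t *dst,
                       int srcWidth, int srcHeight) {
    for (int y = 0; y < srcHeight; ++y) {
        for (int x = 0; x < srcWidth; ++x) {
            // Source pixel (x, y) lands at (srcHeight - 1 - y, x)
            // in the rotated destination.
            dst[x * srcHeight + (srcHeight - 1 - y)] = src[y * srcWidth + x];
        }
    }
}
```

Note that after the rotation the width and height swap, so `ANativeWindow_setBuffersGeometry` would have to be called with the swapped dimensions.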

If you connect your camera to a SurfaceTexture, display will be much more efficient than any YUV➤RGB conversion your program can perform, but I understand that there are many situations when you need the RGB data for image processing, and sometimes you want the result of that processing displayed as the live preview.

This, essentially, is the way OpenCV handles the Android camera and live preview (unfortunately official OpenCV does not use camera2 API, but I have found one tutorial that shows how this can be done).

I strongly suggest using RenderScript for the YUV➤RGB conversion. It's not only much faster; it also yields significant battery savings compared to doing the conversion on the CPU.

An entirely different issue with your code is that it assumes the display window has the same aspect ratio as the image you receive from the camera, doesn't it?

You should not rely on this. Even after you fix the 90° rotation (if the window is portrait), you may see the image distorted. This is not specific to camera2; the same can happen with the deprecated Camera API.

In your case, though, the solution can be different. Instead of tuning the geometry of the window you use to display the live preview, you can skip some input pixels to keep the aspect ratio correct.

In a nutshell, you need

float srcAspectRatio = (float)srcWidth/srcHeight;
float outAspectRatio = (float)buffer.width/buffer.height;

int clippedSrcWidth =  outAspectRatio > srcAspectRatio ? srcWidth : (int)(0.5 + srcHeight*outAspectRatio);
int clippedSrcHeight = outAspectRatio < srcAspectRatio ? srcHeight : (int)(0.5 + srcWidth/outAspectRatio);

ANativeWindow_setBuffersGeometry(window, clippedSrcWidth, clippedSrcHeight, WINDOW_FORMAT_RGBA_8888);

and

for (size_t y = (srcHeight-clippedSrcHeight)/2; y < srcHeight - (srcHeight-clippedSrcHeight)/2; y++)

and so on.
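Putting the clipping together, the centered crop can be sketched as a small helper. The `Crop` struct and `centeredCrop` name are mine, not part of the code above; the width/height formulas are the same ones shown in the snippet, with the offsets made explicit:

```cpp
// Dimensions and top-left offset of a centered crop of the source frame.
struct Crop { int width, height, xOffset, yOffset; };

// Compute a centered crop of the source whose aspect ratio matches
// the output window, as suggested above. Exactly one dimension is
// clipped; the other keeps its full size.
static Crop centeredCrop(int srcWidth, int srcHeight,
                         int outWidth, int outHeight) {
    float srcAspect = (float)srcWidth / srcHeight;
    float outAspect = (float)outWidth / outHeight;
    Crop c;
    c.width  = outAspect > srcAspect ? srcWidth
             : (int)(0.5f + srcHeight * outAspect);
    c.height = outAspect < srcAspect ? srcHeight
             : (int)(0.5f + srcWidth / outAspect);
    c.xOffset = (srcWidth  - c.width)  / 2;
    c.yOffset = (srcHeight - c.height) / 2;
    return c;
}
```

The conversion loops then run from `yOffset` to `yOffset + height` and from `xOffset` to `xOffset + width`, writing to a window sized `width` × `height`.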

And by using the backwards counter to fill the pixels of outPtr, you probably get the mirroring expected for the front-facing camera, don't you?
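If that mirroring is not intended, indexing the output forward in row-major order avoids it. A sketch of the index arithmetic, with a hypothetical `rgbaOffset` helper, assuming the stride is given in pixels (as `ANativeWindow_Buffer::stride` is):

```cpp
#include <cstddef>

// Byte offset of pixel (x, y) in an RGBA_8888 buffer whose row
// stride is expressed in pixels, not bytes.
static inline size_t rgbaOffset(size_t x, size_t y, size_t stridePixels) {
    return (y * stridePixels + x) * 4;
}
```

Inside the loop this becomes `outPtr[rgbaOffset(x, y, buffer.stride) + 0] = R;` and so on through offset `+ 3` for alpha, instead of decrementing `bufferSize`.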

Alex Cohn