
Good morning.

I am making a camera video player using ffmpeg.

During development, I ran into a problem.

If I receive a frame through ffmpeg, decode it, and then sws_scale it up to the screen size, the scaling takes too long and the camera video becomes choppy.

For example, when the incoming resolution is 1920 * 1080 and my phone's screen is 2550 * 1440, sws_scale runs about 6 times slower than when the output is kept at the input size.
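(As a rough check, the comparison can be timed around the two sws_scale calls. A minimal sketch, assuming a decoded source frame, two pre-built contexts for the two output sizes, and the CurrentTimeInMilli() helper used later in this post; the names swsCtxToScreen, swsCtxSameSize, dstScreen, dstSame are placeholders, not from my actual code:)

long long t0 = CurrentTimeInMilli();
sws_scale(swsCtxToScreen,            // 1920x1080 -> 2550x1440
          (const uint8_t * const *)srcFrame->data, srcFrame->linesize,
          0, codecCtx->height,
          dstScreen->data, dstScreen->linesize);

long long t1 = CurrentTimeInMilli();
sws_scale(swsCtxSameSize,            // 1920x1080 -> 1920x1080
          (const uint8_t * const *)srcFrame->data, srcFrame->linesize,
          0, codecCtx->height,
          dstSame->data, dstSame->linesize);

long long t2 = CurrentTimeInMilli();
__android_log_print(ANDROID_LOG_DEBUG, "ScaleTiming",
                    "to screen: %lld ms, same size: %lld ms",
                    t1 - t0, t2 - t1);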

Currently, in the NDK code, sws_scale outputs at the same resolution that came in from the camera; this improves the speed and the video is no longer interrupted.

However, the SurfaceView is full screen, while the input resolution is below the full screen resolution, so the image does not fill the view.

Scale AVFrame

ctx->m_SwsCtx = sws_getContext(
        ctx->m_CodecCtx->width,
        ctx->m_CodecCtx->height,
        ctx->m_CodecCtx->pix_fmt,
        //width,                  // 2550 (SurfaceView)
        //height,                 // 1440
        ctx->m_CodecCtx->width,   // 1920 (Camera)
        ctx->m_CodecCtx->height,  // 1080
        AV_PIX_FMT_RGBA,
        SWS_FAST_BILINEAR,
        NULL, NULL, NULL);
if(ctx->m_SwsCtx == NULL)
{
    __android_log_print(
        ANDROID_LOG_DEBUG,
        "[ VideoStream::SetResolution Fail ] ",
        "[ Error Message : %s ]",
            "SwsContext Alloc fail");

    SET_FIELD_TO_INT(pEnv, ob, err, 0x40);

    return ob;
}
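(Not shown above: the RGBA destination frame has to be allocated somewhere. A minimal sketch of that setup, assuming ffmpeg's av_frame_alloc()/av_image_alloc() and this question's field names; my actual code allocates it elsewhere:)

#include <libavutil/frame.h>
#include <libavutil/imgutils.h>

// Allocate the RGBA destination frame at the camera resolution.
ctx->m_DstFrame = av_frame_alloc();
if(ctx->m_DstFrame != NULL)
{
    ctx->m_DstFrame->width  = ctx->m_CodecCtx->width;   // 1920
    ctx->m_DstFrame->height = ctx->m_CodecCtx->height;  // 1080
    ctx->m_DstFrame->format = AV_PIX_FMT_RGBA;
    av_image_alloc(ctx->m_DstFrame->data, ctx->m_DstFrame->linesize,
                   ctx->m_DstFrame->width, ctx->m_DstFrame->height,
                   AV_PIX_FMT_RGBA, 1);
}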

sws_scale(
    ctx->m_SwsCtx,
    (const uint8_t * const *)ctx->m_SrcFrame->data,
    ctx->m_SrcFrame->linesize,
    0,
    ctx->m_CodecCtx->height,
    ctx->m_DstFrame->data,
    ctx->m_DstFrame->linesize);

PDRAWOBJECT drawObj = (PDRAWOBJECT)malloc(sizeof(DRAWOBJECT));
if(drawObj != NULL)
{
    drawObj->m_Width = ctx->m_Width;
    drawObj->m_Height = ctx->m_Height;
    drawObj->m_Format = WINDOW_FORMAT_RGBA_8888;
    drawObj->m_Frame = ctx->m_DstFrame;

    SET_FIELD_TO_INT(pEnv, ob, err, -1);
    SET_FIELD_TO_LONG(pEnv, ob, addr, (jlong)drawObj);
}

Draw SurfaceView;

PDRAWOBJECT d = (PDRAWOBJECT)drawObj;

long long curr1 = CurrentTimeInMilli();

ANativeWindow *window = ANativeWindow_fromSurface(pEnv, surface);

// Size the window buffers to match the frame dimensions.
ANativeWindow_setBuffersGeometry(
        window,
        d->m_Width,
        d->m_Height,
        WINDOW_FORMAT_RGBA_8888);

ANativeWindow_Buffer windowBuffer;

ANativeWindow_lock(window, &windowBuffer, 0);

uint8_t * dst = (uint8_t*)windowBuffer.bits;
int  dstStride = windowBuffer.stride * 4;
uint8_t * src = (uint8_t*) (d->m_Frame->data[0]);
int srcStride = d->m_Frame->linesize[0];

// Copy row by row, copying only the visible width: linesize may
// include padding and could be wider than the window's stride.
int rowBytes = d->m_Width * 4;
for(int h = 0; h < d->m_Height; ++h)
{
    memcpy(dst + h * dstStride, src + h * srcStride, rowBytes);
}

ANativeWindow_unlockAndPost(window);
ANativeWindow_release(window);

Result: (screenshot) the video is drawn at the camera resolution and does not fill the full-screen SurfaceView.

I would like the image to fill the whole screen. Is there a way to scale it up to the SurfaceView size in the NDK or Android, rather than with sws_scale?

Thank you.

심상원

1 Answer


You don't need to scale your video. Actually, you don't even need to convert it to RGB (this is also a significant burden for the CPU).

The trick is to use an OpenGL renderer with a shader that takes YUV input and displays the texture scaled to your screen.

Start with this solution (reusing code from Android system): https://stackoverflow.com/a/14999912/192373
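(For illustration, a minimal sketch of that approach: a GLES2 fragment shader as a C string that converts three single-channel YUV420P plane textures to RGB on the GPU. The sampler and texture names and the BT.601 full-range coefficients are my assumptions, not code from the linked answer:)

#include <GLES2/gl2.h>

// Fragment shader: sample the Y, U and V planes and convert to RGB.
static const char *kYuvFragShader =
    "precision mediump float;                           \n"
    "varying vec2 vTexCoord;                            \n"
    "uniform sampler2D uTexY;                           \n"
    "uniform sampler2D uTexU;                           \n"
    "uniform sampler2D uTexV;                           \n"
    "void main() {                                      \n"
    "    float y = texture2D(uTexY, vTexCoord).r;       \n"
    "    float u = texture2D(uTexU, vTexCoord).r - 0.5; \n"
    "    float v = texture2D(uTexV, vTexCoord).r - 0.5; \n"
    "    gl_FragColor = vec4(y + 1.402 * v,             \n"
    "                        y - 0.344 * u - 0.714 * v, \n"
    "                        y + 1.772 * u,             \n"
    "                        1.0);                      \n"
    "}                                                  \n";

// Per frame: upload the Y plane (U and V are analogous at half size).
// Using linesize as the texture width accounts for row padding; the
// quad's texture coordinates then crop to width/linesize horizontally.
glBindTexture(GL_TEXTURE_2D, texY);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE,
             frame->linesize[0], frame->height, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, frame->data[0]);

Because the textured quad is drawn to fill the viewport, the GPU scales 1920 * 1080 up to the full-screen SurfaceView essentially for free.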

Alex Cohn
  • hi sir.. i have a similar problem, can you please check it out? https://stackoverflow.com/questions/57724925/how-to-convert-yuv420sp-to-rgb-and-display-it – markhamknight Sep 05 '19 at 07:45