
I wrote a small program using FFmpeg's libraries that does the following:

1. Decode a frame.
2. Convert the frame to RGB24.
3. Convert the RGB24 frame back to YUV420P.
4. Encode the YUV420P frame and mux it into the output video file.

But the output video is not the same as the input video; the final video has artifacts (horizontal lines). I also get a warning when the RGB-to-YUV method is called:

Warning: data is not aligned! This can lead to a speedloss

I suspect something is wrong with my format conversion methods, because when I comment out the conversion steps the output video is identical to the input video.

Following are my methods:

int VideoFileInstance::convertToRGBFrame(AVFrame **yuvframe,AVFrame **rgbPictInfo) {
    int ret;
    int width = ifmt_ctx->streams[VIDEO_STREAM_INDEX]->codec->width;
    int height = ifmt_ctx->streams[VIDEO_STREAM_INDEX]->codec->height;

    int m_bufferSize = avpicture_get_size(PIX_FMT_RGB24,width, height);

    uint8_t *buffer = (uint8_t *)av_malloc(m_bufferSize);

    //init context if not done already.
    if (imgConvertCtxYUVToRGB == NULL) {
        //init once
        imgConvertCtxYUVToRGB = sws_getContext(width, height, PIX_FMT_YUV420P, width, height, PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL, NULL);

        if(imgConvertCtxYUVToRGB == NULL) {
            av_log(NULL,AV_LOG_ERROR,"error creating img context");
            return -1;
        }

    }


    avpicture_fill((AVPicture*)(*rgbPictInfo), buffer,
                   PIX_FMT_RGB24,
                   width, height);

    uint8_t *inDate[3] = {
        (*yuvframe)->data[0] ,
        (*yuvframe)->data[1] ,
        (*yuvframe)->data[2]
    };

    int destLineSize[1] = {3*width};

    ret = sws_scale(imgConvertCtxYUVToRGB, inDate, (*yuvframe)->linesize, 0, height,
              (*rgbPictInfo)->data, destLineSize);

    av_free(buffer);


    return ret;
}

int VideoFileInstance::convertToYuvFrame (AVFrame **rgbFrame , AVFrame ** yuvFrame) {
    int ret = 0;
    int width = ifmt_ctx->streams[VIDEO_STREAM_INDEX]->codec->width;
    int height = ifmt_ctx->streams[VIDEO_STREAM_INDEX]->codec->height;
    int m_bufferSize = avpicture_get_size(PIX_FMT_YUV420P, width, height);

    uint8_t *buffer = (uint8_t *)av_malloc(m_bufferSize);

    avpicture_fill((AVPicture*)(*yuvFrame), buffer, PIX_FMT_YUV420P,
                   width, height);

    if(imgConvertCtxRGBToYUV == NULL) {
        imgConvertCtxRGBToYUV = sws_getContext(width, height, PIX_FMT_RGB24, width, height, PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);

        if(imgConvertCtxRGBToYUV == NULL){
            av_log(NULL,AV_LOG_ERROR,"error creating img context");
            return -1;
        }
    }

    avpicture_fill((AVPicture*)(*yuvFrame), buffer,
                   PIX_FMT_YUV420P,
                   width, height);




    sws_scale(imgConvertCtxRGBToYUV,(*rgbFrame)->data , (*rgbFrame)->linesize, 0, height,
              (*yuvFrame)->data , (*yuvFrame)->linesize);

    av_free(buffer);

    return ret;
}

The dimensions of the input video are 424 x 200. Is there anything wrong with my conversion functions?

gaurav
  • might be a help: http://stackoverflow.com/questions/16667687/how-to-convert-rgb-from-yuv420p-for-ffmpeg-encoder –  Jul 22 '15 at 10:45
  • I have seen that already. My code is almost identical to the one given there. Any ideas why I am seeing that warning about data loss? Also, I have one more question: after YUV -> RGB -> YUV, if I compare the data in the YUV frame before and after the conversion, should I expect the data to be the same? – gaurav Jul 22 '15 at 11:07
  • No, it shouldn't; it will certainly change. By the sound of that error, your buffer size is being miscalculated and you're getting a fractional number rather than a whole number. Try setting a larger buffer than you need and see if you still get that error: it might be saying that you have x slots and require x slots, but when the math runs it produces a fraction and requires one additional buffer slot. –  Jul 22 '15 at 11:16
  • What environment are you using? –  Jul 22 '15 at 11:16
  • The warning is actually about speed loss; my bad, I wrote data loss in my previous comment. I found some old postings about this warning on the mailing lists and it seems to be related to linesizes. Are there any constraints on linesizes (multiple of 8 or 16)? I am using Xcode; this is C++ code. – gaurav Jul 22 '15 at 11:30
  • Yeah, linesizes happen as powers of two, so you have 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, etc. –  Jul 22 '15 at 12:01
  • I see. In my case the linesizes are not one of those numbers; for instance, for rgbFrame the linesize I am using is 3 * 424 (= 1272). So should I use 2048 here? (See the sketch after these comments for how the aligned value works out.) – gaurav Jul 22 '15 at 12:11
  • Yeah, try overshooting; it's better that it doesn't fill up than that there isn't enough. –  Jul 22 '15 at 12:34
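
For reference, a small standalone sketch of how the aligned linesize works out for the 424-pixel-wide RGB24 frame discussed above. This is only an illustration; it uses av_image_get_linesize and the FFALIGN macro from libavutil, with the newer AV_PIX_FMT_RGB24 name.

extern "C" {
#include <libavutil/imgutils.h>
#include <libavutil/common.h>
}
#include <cstdio>

int main() {
    // Tightly packed RGB24 linesize for a 424-pixel-wide frame: 3 * 424 = 1272 bytes.
    int packed = av_image_get_linesize(AV_PIX_FMT_RGB24, 424, 0);

    // Rounding up to a 32-byte boundary gives the padded linesize that an
    // aligned allocator such as av_image_alloc(..., 32) would produce.
    int aligned = FFALIGN(packed, 32);

    std::printf("packed linesize:  %d\n", packed);   // 1272
    std::printf("aligned linesize: %d\n", aligned);  // 1280
    return 0;
}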

2 Answers


See https://stackoverflow.com/a/31270501/4726410, second bullet point: avpicture_* and related functions don't guarantee alignment, so you need to use the av_image_* counterparts with align=16 or align=32.
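
As an illustration, a minimal sketch of that approach for the RGB24 destination, keeping the width, height and deprecated PIX_FMT_* names from the question (error handling abbreviated):

uint8_t *dst_data[4];
int dst_linesize[4];

// Allocate one aligned buffer for the RGB24 image and fill the data
// pointers and linesizes; align = 32 pads each row to a 32-byte boundary.
int ret = av_image_alloc(dst_data, dst_linesize,
                         width, height, PIX_FMT_RGB24, 32);
if (ret < 0) {
    av_log(NULL, AV_LOG_ERROR, "could not allocate aligned image\n");
    return ret;
}

// For a 424-pixel-wide frame, dst_linesize[0] becomes 1280 instead of the
// unaligned 3 * 424 = 1272, which is what silences the sws_scale warning.
sws_scale(imgConvertCtxYUVToRGB, (*yuvframe)->data, (*yuvframe)->linesize,
          0, height, dst_data, dst_linesize);

// The whole image sits in a single buffer; free it through the first pointer.
av_freep(&dst_data[0]);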

Ronald S. Bultje
  • In the above answer I am not sure how to use avcodec_align_dimensions2(). What value should I pass as the last argument, i.e. linesize_align[]? If I just pass [32] I get modified width and height of 32. – gaurav Jul 22 '15 at 12:49
  • I would recommend to _not_ use avcodec_align_dimensions2(), and instead just use av_image_*() directly, with align=32. That will give you AVFrames with aligned linesize. – Ronald S. Bultje Jul 22 '15 at 13:27
  • Thanks, using av_image_alloc fixed the issue for me. I am posting the corrected code for one of my methods below. – gaurav Jul 22 '15 at 14:57

Using Ronald's suggestion of switching to the av_image_* methods fixed the issue for me. The following is the fixed code for one of the methods.

int VideoFileInstance::convertToRGBFrame(AVFrame **yuvframe,AVFrame **rgbPictInfo) {
    int ret;
    int width = ifmt_ctx->streams[VIDEO_STREAM_INDEX]->codec->width;
    int height = ifmt_ctx->streams[VIDEO_STREAM_INDEX]->codec->height;

    //init context if not done already.
    if (imgConvertCtxYUVToRGB == NULL) {
        //init once
        imgConvertCtxYUVToRGB = sws_getContext(width, height, PIX_FMT_YUV420P, width, height, PIX_FMT_RGB24, SWS_FAST_BILINEAR, 0, 0, 0);

        if(imgConvertCtxYUVToRGB == NULL) {
            av_log(NULL,AV_LOG_ERROR,"error creating img context");
            return -1;
        }
    }

    // Allocates one aligned buffer and fills data/linesize;
    // call av_freep(&(*rgbPictInfo)->data[0]) later to free it.
    ret = av_image_alloc((*rgbPictInfo)->data,     //data pointers to be filled
                         (*rgbPictInfo)->linesize, //line sizes to be filled
                         width, height,
                         PIX_FMT_RGB24,            //pixel format
                         32);                      //align
    if (ret < 0) {
        av_log(NULL,AV_LOG_ERROR,"error allocating rgb image");
        return ret;
    }

    ret = sws_scale(imgConvertCtxYUVToRGB, (*yuvframe)->data, (*yuvframe)->linesize, 0, height,
              (*rgbPictInfo)->data, (*rgbPictInfo)->linesize);

    return ret;
}
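
The other direction presumably needs the same treatment; here is a sketch of how convertToYuvFrame could be rewritten along the same lines, under the same assumptions and member names as above:

int VideoFileInstance::convertToYuvFrame(AVFrame **rgbFrame, AVFrame **yuvFrame) {
    int width  = ifmt_ctx->streams[VIDEO_STREAM_INDEX]->codec->width;
    int height = ifmt_ctx->streams[VIDEO_STREAM_INDEX]->codec->height;

    //init context if not done already.
    if (imgConvertCtxRGBToYUV == NULL) {
        imgConvertCtxRGBToYUV = sws_getContext(width, height, PIX_FMT_RGB24, width, height, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, 0, 0, 0);

        if (imgConvertCtxRGBToYUV == NULL) {
            av_log(NULL, AV_LOG_ERROR, "error creating img context");
            return -1;
        }
    }

    // Aligned allocation of the three YUV420P planes; free later with
    // av_freep(&(*yuvFrame)->data[0]).
    int ret = av_image_alloc((*yuvFrame)->data, (*yuvFrame)->linesize,
                             width, height, PIX_FMT_YUV420P, 32);
    if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "error allocating yuv image");
        return ret;
    }

    return sws_scale(imgConvertCtxRGBToYUV, (*rgbFrame)->data, (*rgbFrame)->linesize, 0, height,
                     (*yuvFrame)->data, (*yuvFrame)->linesize);
}
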
gaurav