I have modified h264_encoder_impl to use the NVIDIA GRID hardware encoder, replacing the OpenH264-specific calls with NVIDIA API calls. The encoded stream can be written to a file successfully, but filling only _buffer and _size of encoded_image_ is not enough: the RTPFragmentationHeader also needs to be filled.
// RtpFragmentize(EncodedImage* encoded_image,
// std::unique_ptr<uint8_t[]>* encoded_image_buffer,
// const VideoFrameBuffer& frame_buffer,
// SFrameBSInfo* info,
// RTPFragmentationHeader* frag_header)
// encode
openh264_->Encode(input, &info /*out*/);
// fragmentize ?
RtpFragmentize(&encoded_image_ /*out*/, &encoded_image_buffer_, *frame_buffer,
&info, &frag_header /*out*/);
// ...
// send
encoded_image_callback_->OnEncodedImage(encoded_image_, &codec_specific, &frag_header);
The current OpenH264-based implementation fills frag_header in RtpFragmentize(), and VP8 fills it differently. I can see it doing something with NAL units and layers, which also calculates encoded_image->_length, but I have no idea how. I cannot find any documentation on it anywhere; the VP8 and OpenH264 implementations are all I have to go by.
So what is RTPFragmentationHeader? What does it do? What is encoded_image->_length? How do I fill them correctly when using a custom H264 encoder? I can find the start codes, but what comes next? How do I fill all of the header's members?