
I saw this answer: Libav (ffmpeg) copying decoded video timestamps to encoder.

But I still don't understand why we need both the stream timebase and the codec timebase. Currently I'm trying to write some code that determines the time at which a frame from my decoder is shown in the video, and I think the right way to do that is like this:

`aVFrame.best_effort_timestamp * stream.time_base.num * stream.time_base.den`

Is that correct?

Daniel Kobe

1 Answer


"why we need both" is a loaded statement. We don't NEED both. Your question should be why do we HAVE both.

This is not an ffmpeg/libav invention; it is a side effect of how media files work. Some (but not all) codecs have a mechanism for encoding a time base into the codec bitstream (for example H.264). These bitstreams can then be written/muxed into a container (for example MP4) that also encodes a time base. In theory these should match, but in practice they often do not. libav is just parsing the file and populating the structs with what is there.
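For illustration, here is a minimal sketch of where the two time bases show up in the libav structs, and the usual way to turn a frame's best_effort_timestamp into seconds using the stream time base. It assumes a file path in argv[1], picks a single video stream, and drops most error handling; the variable names are just for this sketch:

```c
/* Sketch: print the stream and codec time bases, and convert each decoded
 * frame's best_effort_timestamp to seconds. Error handling trimmed. */
#include <stdio.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libavutil/avutil.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)
        return 1;
    avformat_find_stream_info(fmt, NULL);

    int vidx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    if (vidx < 0)
        return 1;
    AVStream *st = fmt->streams[vidx];

    const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
    AVCodecContext *ctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(ctx, st->codecpar);
    avcodec_open2(ctx, dec, NULL);

    /* Container-level time base: packet/frame timestamps are in these units. */
    printf("stream time_base: %d/%d\n", st->time_base.num, st->time_base.den);
    /* Codec-level time base: may or may not be populated for a decoder. */
    printf("codec  time_base: %d/%d\n", ctx->time_base.num, ctx->time_base.den);

    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();

    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == vidx && avcodec_send_packet(ctx, pkt) >= 0) {
            while (avcodec_receive_frame(ctx, frame) >= 0) {
                if (frame->best_effort_timestamp != AV_NOPTS_VALUE) {
                    /* Seconds = ticks * num / den of the *stream* time base;
                     * av_q2d(st->time_base) is num / (double)den. */
                    double t = frame->best_effort_timestamp * av_q2d(st->time_base);
                    printf("frame displayed at %.3f s\n", t);
                }
            }
        }
        av_packet_unref(pkt);
    }

    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    avformat_close_input(&fmt);
    return 0;
}
```

Note that best_effort_timestamp is expressed in stream time-base ticks, so seconds come from ticks * num / den (which is what av_q2d computes), not ticks * num * den.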

szatmary