I saw this answer: Libav (ffmpeg) copying decoded video timestamps to encoder.
But I still don't understand why we need both the stream time base and the codec time base. Currently I'm trying to write code that determines the time at which a decoded frame is shown in the video, and I think the right way to do that is:
aVFrame.best_effort_timestamp * stream.time_base.num / stream.time_base.den
Is that correct?
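For context, this is a rough sketch of the decode loop I have in mind (not a complete program: `fmt_ctx`, `dec_ctx` and `video_stream_index` stand for a format context, decoder context and stream index that are set up elsewhere, and `av_q2d(stream->time_base)` is just `time_base.num / time_base.den` as a double):

```c
#include <stdio.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libavutil/rational.h>

/* Sketch: decode frames and print the time each one is shown, assuming
 * fmt_ctx, dec_ctx and video_stream_index have already been opened/found. */
static void print_frame_times(AVFormatContext *fmt_ctx,
                              AVCodecContext *dec_ctx,
                              int video_stream_index)
{
    AVStream *stream = fmt_ctx->streams[video_stream_index];
    AVPacket *pkt    = av_packet_alloc();
    AVFrame  *frame  = av_frame_alloc();

    while (av_read_frame(fmt_ctx, pkt) >= 0) {
        if (pkt->stream_index == video_stream_index &&
            avcodec_send_packet(dec_ctx, pkt) >= 0) {
            while (avcodec_receive_frame(dec_ctx, frame) >= 0) {
                if (frame->best_effort_timestamp != AV_NOPTS_VALUE) {
                    /* best_effort_timestamp is (as far as I understand) in
                     * stream time_base units, so multiplying by the time base
                     * (num / den) should give seconds. */
                    double seconds = frame->best_effort_timestamp *
                                     av_q2d(stream->time_base);
                    printf("frame shown at %f s\n", seconds);
                }
            }
        }
        av_packet_unref(pkt);
    }

    av_frame_free(&frame);
    av_packet_free(&pkt);
}
```

Is the stream time base the right one to use here, or should I be using the codec context's time base instead?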