
I have used the method from iOS4: how do I use video file as an OpenGL texture? to render video frames in OpenGL successfully.

However, this method falls down when you want to scrub (jump to a certain point in playback), because it only supplies video frames sequentially.

Does anyone know a way to achieve this behaviour?
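For context, here is a minimal sketch of the kind of sequential AVAssetReader loop the linked answer is built on (in Swift for readability; the thread predates Swift, and the `onFrame` callback and names are my own illustration, not code from the linked answer):

```swift
import AVFoundation

// Sequential decode: AVAssetReader hands back frames strictly in order.
func readFramesSequentially(from videoURL: URL,
                            onFrame: (CVPixelBuffer) -> Void) throws {
    let asset = AVURLAsset(url: videoURL)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(
        track: track,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                             kCVPixelFormatType_32BGRA])
    reader.add(output)
    guard reader.startReading() else { return }

    // There is no seek here: copyNextSampleBuffer() only yields the next frame.
    while let sample = output.copyNextSampleBuffer() {
        if let pixelBuffer = CMSampleBufferGetImageBuffer(sample) {
            onFrame(pixelBuffer) // e.g. upload via a CVOpenGLESTextureCache
        }
    }
}
```

The limitation is visible in the loop: `copyNextSampleBuffer()` only ever hands you the next frame, and the reader exposes no seek call.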

- Stephen suggests using the `timeRange` property in his answer to a similar question here: http://stackoverflow.com/a/5508955/19679 , but I don't think that can be used without recreating your AVAssetReader. – Brad Larson May 02 '12 at 20:21
- AVAssetReader is "one-shot" and must be recreated in order to read from a new position (requires login to read): https://devforums.apple.com/message/383762 – Rhythmic Fistman May 03 '12 at 08:01
- You do indeed need to recreate the AVAssetReader; this seems to happen fast enough that it does not really have a negative impact on the application. – RyanSullivan May 14 '12 at 11:26
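Putting those comments together, a minimal sketch of the recreate-on-seek workaround might look like this (Swift; the `makeReader` helper and its shape are my assumptions for illustration):

```swift
import AVFoundation

// "Seeking" with AVAssetReader: discard the old reader and build a new one
// whose timeRange starts at the scrub position.
func makeReader(for asset: AVAsset,
                startingAt time: CMTime) -> (AVAssetReader, AVAssetReaderTrackOutput)? {
    guard let track = asset.tracks(withMediaType: .video).first,
          let reader = try? AVAssetReader(asset: asset) else { return nil }

    let output = AVAssetReaderTrackOutput(
        track: track,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                             kCVPixelFormatType_32BGRA])
    reader.add(output)

    // timeRange must be set before startReading(); this decodes from the
    // scrub point to the end of the asset.
    reader.timeRange = CMTimeRange(start: time, duration: .positiveInfinity)
    guard reader.startReading() else { return nil }
    return (reader, output)
}

// Usage on each scrub, e.g. jumping to 12.5 s:
// let pair = makeReader(for: asset,
//                       startingAt: CMTime(seconds: 12.5, preferredTimescale: 600))
```

Each scrub throws away the old reader and pays the cost of building a new one, which, per the comment above, is fast enough in practice not to hurt the application.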

1 Answer


One easy way to implement this is to export the video to a series of frames, store each frame as a PNG, and then "scrub" by seeking to the PNG at a specific offset. That gives you random access into the image stream, at the cost of decoding the entire video up front and holding all the data on disk. It also means decoding each frame as it is accessed, which eats CPU, but modern iPhones and iPads can handle it as long as you are not doing too much else.
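A minimal sketch of that approach, assuming a fixed frame rate and a `frame_%05d.png` naming scheme (both my assumptions; the answer does not specify an export mechanism):

```swift
import AVFoundation
import CoreImage
import UIKit

// One-time export: decode every frame and write it out as a numbered PNG.
func exportFrames(from asset: AVAsset, into directory: URL) throws {
    guard let track = asset.tracks(withMediaType: .video).first else { return }
    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(
        track: track,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                             kCVPixelFormatType_32BGRA])
    reader.add(output)
    guard reader.startReading() else { return }

    let context = CIContext()
    var index = 0
    while let sample = output.copyNextSampleBuffer() {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sample) else { continue }
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { continue }
        let url = directory.appendingPathComponent(String(format: "frame_%05d.png", index))
        try UIImage(cgImage: cgImage).pngData()?.write(to: url)
        index += 1
    }
}

// "Scrubbing" is then just a file load at frame index = time * frame rate.
func frame(at seconds: Double, frameRate: Double, in directory: URL) -> UIImage? {
    let index = Int(seconds * frameRate)
    let url = directory.appendingPathComponent(String(format: "frame_%05d.png", index))
    return UIImage(contentsOfFile: url.path)
}
```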

- I don't know if you have done the math, but PNGs are basically uncompressed if they are of natural scenes, so a 10 min 1024x1024 video would be 10 min × 60 s × 30 fps × 1024 × 1024 × 4 bytes ≈ 75 GB you would have to store somewhere. So that's not going to happen. – Tom Andersen Apr 05 '15 at 22:08