We are currently working on a chat + (file sharing +) video conference application using HTML5 WebSockets. To make our application more accessible, we want to implement Adaptive Streaming, using the following sequence:
- Raw audio/video data goes from the client to the server
- The stream is split into 1-second chunks
- Each chunk is encoded at several bitrates
- The client receives a manifest file describing the available segments (see the sketch after this list)
- The client downloads one segment over normal HTTP
- The bitrate of the next segment is chosen based on how the previous one performed
- The client may select from a number of alternate streams at a variety of data rates
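
Since we aren't tied to a standard manifest format, here is a minimal sketch of what the server could hand out, assuming a hypothetical custom JSON layout (the stream name, URL scheme, and bitrate ladder are made up for illustration, not part of any spec):

```python
import json

def build_manifest(stream_id, duration_seconds, bitrates_kbps):
    """Describe every 1-second segment of a stream at each available bitrate.

    Hypothetical custom format -- not a standard manifest like M3U8 or SMIL.
    """
    return {
        "stream": stream_id,
        "segment_duration": 1,            # seconds per segment
        "bitrates": bitrates_kbps,        # available encodings, in kbit/s
        "segments": [
            {
                "index": i,
                # one URL per bitrate so the client can switch between segments
                "urls": {
                    kbps: f"/streams/{stream_id}/{kbps}k/segment_{i:05d}.ts"
                    for kbps in bitrates_kbps
                },
            }
            for i in range(duration_seconds)
        ],
    }

if __name__ == "__main__":
    manifest = build_manifest("conference-42", duration_seconds=3,
                              bitrates_kbps=[200, 500, 1000])
    print(json.dumps(manifest, indent=2))
```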
So: how do we split our audio/video data into chunks with Python?
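
For the audio part this is straightforward in pure Python if the client sends uncompressed PCM: one second is simply `sample_rate * channels * bytes_per_sample` bytes, so the server can slice the incoming byte stream at that boundary. This is only a sketch under that assumption; raw video (or anything already inside a container) can't be cut by byte count alone, because each chunk has to start on a keyframe.

```python
def pcm_chunks(data, sample_rate=44100, channels=2, sample_width=2):
    """Split raw PCM audio bytes into 1-second chunks.

    Assumes uncompressed PCM; sample_rate, channels and sample_width
    (bytes per sample) must match whatever the client actually sends.
    """
    bytes_per_second = sample_rate * channels * sample_width
    for offset in range(0, len(data), bytes_per_second):
        yield data[offset:offset + bytes_per_second]

# Example: 3 seconds of silence -> three 1-second chunks
silence = bytes(44100 * 2 * 2 * 3)
chunks = list(pcm_chunks(silence))
assert len(chunks) == 3
```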
We know Microsoft has already built Expression Encoder 2, which enables Adaptive Streaming, but it only supports Silverlight, and that's not what we want.
Edit:
There's also a solution called FFmpeg (and a PyFFmpeg wrapper for Python), but it only supports Apple's adaptive streaming.
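
For the video side, one option that keeps the Python code thin is to drive the ffmpeg binary from the server with subprocess and let its segment muxer do the cutting and re-encoding. This is just a sketch, assuming a reasonably recent ffmpeg on the server's PATH; the bitrate values, input file, and output paths are placeholders, and the forced-keyframe expression is only there so the 1-second cuts can land on keyframes:

```python
import os
import subprocess

def encode_rendition(source, out_dir, video_kbps, audio_kbps=64):
    """Re-encode one input file into 1-second MPEG-TS segments at one bitrate."""
    os.makedirs(out_dir, exist_ok=True)
    cmd = [
        "ffmpeg", "-i", source,
        "-b:v", f"{video_kbps}k", "-b:a", f"{audio_kbps}k",
        # force a keyframe every second so each segment can start cleanly
        "-force_key_frames", "expr:gte(t,n_forced*1)",
        "-f", "segment", "-segment_time", "1", "-segment_format", "mpegts",
        f"{out_dir}/segment_%05d.ts",
    ]
    subprocess.check_call(cmd)

# One call per rung of the bitrate ladder (placeholder paths)
for kbps in (200, 500, 1000):
    encode_rendition("input.webm", f"streams/conference-42/{kbps}k", kbps)
```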