I already know there are existing questions on Stack Overflow about streaming live video from iOS devices to a server, but I'm going to ask specifically about the encoding portion.

I'm a bit lost on which software encoding options are available to encode the raw (live) video footage from an iOS device before sending it to a server like AWS. After using AVCam to capture video, would I use something like FFmpeg to encode the raw video on the fly and then send the encoded video to the server over HTTP or RTSP? Or do I have the concepts wrong?

Ravi Gautam
abcf

3 Answers


As of iOS 8, access to the hardware-backed encoder/decoder API is available. For an overview of how this works, watch the WWDC 2014 session "Direct Access to Video Encoding and Decoding". In a nutshell: you get pixel buffers from the camera, feed them to the encoder, and get encoded blocks back that you can in turn hand to whatever library you are using for network streaming. You might have to do some data conversion before you can use them directly. As far as I remember, FFmpeg does have RTSP support, so you should be able to use that once you have access to the encoded frames...
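A minimal Swift sketch of that flow, using VideoToolbox's block-based encode API (iOS 9+). The dimensions, property choices, and handler body are illustrative assumptions, not a complete implementation:

```swift
import AVFoundation
import VideoToolbox

// Sketch: create a hardware H.264 compression session and feed it
// pixel buffers arriving from an AVCaptureVideoDataOutput delegate.
var session: VTCompressionSession?
VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: 1280, height: 720,
    codecType: kCMVideoCodecType_H264,
    encoderSpecification: nil,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil,            // nil because the block-based encode API is used below
    refcon: nil,
    compressionSessionOut: &session)

if let session = session {
    // Ask the encoder to favor low latency over maximum compression.
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime,
                         value: kCFBooleanTrue)

    // Call this for every captured frame, e.g. from
    // captureOutput(_:didOutput:from:).
    func encode(pixelBuffer: CVPixelBuffer, pts: CMTime) {
        VTCompressionSessionEncodeFrame(
            session, imageBuffer: pixelBuffer,
            presentationTimeStamp: pts, duration: .invalid,
            frameProperties: nil, infoFlagsOut: nil) { status, _, sampleBuffer in
            guard status == noErr, let sampleBuffer = sampleBuffer else { return }
            // sampleBuffer now holds an encoded H.264 block; hand its
            // data to your streaming library (RTSP/RTMP) here.
        }
    }
}
```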

Christian

The iOS device captures video into a MOV or MP4 container.

It is possible to capture raw data (YUV, BGRA etc.) from the iOS device camera using AVFoundation (see this and this and many others).
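As a sketch of that capture path (class name, pixel format, and queue label are illustrative), getting raw frames with AVFoundation looks roughly like this:

```swift
import AVFoundation

// Sketch: deliver raw BGRA frames from the camera via AVFoundation.
final class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()

    func start() throws {
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        session.addInput(try AVCaptureDeviceInput(device: camera))

        let output = AVCaptureVideoDataOutput()
        // Request uncompressed BGRA; YUV formats are also available.
        output.videoSettings =
            [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "frames"))
        session.addOutput(output)
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Each callback delivers one uncompressed frame as a CVPixelBuffer.
        let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        _ = pixelBuffer // hand off to an encoder here
    }
}
```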

However, it is not efficient to manually encode the raw data, since apparently the only way to use hardware acceleration on the device is by going through AVAssetWriter, which outputs a file.

There is a way to achieve live streaming by reading back the file as it is written and packetizing it using protocols such as RTSP and RTMP, but it's not very straightforward.

There are a few implementations you can check out like:

http://www.gdcl.co.uk/2013/02/20/iOS-Video-Encoding.html for RTSP

and

Streaming support library for the Livu iPhone application for RTMP (older lib for Livu)

The writer of the Livu app is also on Stack Overflow; check his Q&A: https://stackoverflow.com/users/227021/steve-mcfarlin

aergistal

The video is already encoded when it is stored on the iOS device. Encoding is just a way of digitally representing the video, in most cases by capturing values that represent the color, brightness etc. of each pixel in each frame of the video.

Most encoding also includes techniques to compress the video to conserve space. These techniques include using some of the frames as a reference for following frames (and, in some cases, preceding frames). For example, the first frame might be a reference frame (commonly called an I-frame), and for the following five frames, instead of storing all the pixel data, only the pixels that have changed are stored. It is easy to see how this can save a lot of storage, particularly for scenes where there is little movement or change.
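A back-of-the-envelope calculation shows why this compression matters for streaming. All the figures below are illustrative, typical assumptions, not measurements:

```swift
// Rough arithmetic: uncompressed 720p video vs. a typical H.264 stream.
let bytesPerRawFrame = 1280 * 720 * 3 / 2          // 720p in YUV 4:2:0: 1.5 bytes/pixel
let rawBitsPerSecond = bytesPerRawFrame * 8 * 30   // at 30 fps ≈ 332 Mbit/s uncompressed
let typicalH264BitsPerSecond = 2_000_000           // ~2 Mbit/s is common for streamed 720p
let ratio = rawBitsPerSecond / typicalH264BitsPerSecond   // ≈ 165:1 reduction
```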

As a general rule, you lose some quality when you compress, and the more you compress the more quality you lose. On iOS you can specify the quality level you want when you are setting up the video capture - see 'Configuring a Session' in the AVFoundation Programming Guide:
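As a sketch (the 720p target here is an assumption), picking the capture quality up front looks like this:

```swift
import AVFoundation

let session = AVCaptureSession()
// Choose the capture quality before recording so the hardware encoder
// produces the stream you want, with no extra transcoding step.
if session.canSetSessionPreset(.hd1280x720) {
    session.sessionPreset = .hd1280x720   // alternatives: .high, .medium, .low
}
```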

Unless you need to store video locally at a different quality level than you are sending to the server, you should just record at the quality level you want and avoid having to compress on the phone, as that is quite a bit of work for the device. If you do want a higher quality stored on the phone and a lower quality (to speed up transmission or save bandwidth) sent to the server, take a look at this answer:

Mick
  • don't know if this is a dumb question, but what if I didn't want to store it on the iOS device but automatically send the live stream to the server via HTTP or RTSP? – abcf Mar 20 '15 at 23:19
  • @abcf I think our posts crossed... See the bottom of the updated answer - in simple terms, select the quality level you want when recording in this case, to avoid having to do any extra work compressing on the device before you send your stream out. – Mick Mar 20 '15 at 23:30
  • @Mick, he needs to live-stream it not just upload a file to the server. – aergistal Mar 21 '15 at 06:37
  • 1
    @aergistal - I guess what I was trying to say is that, unless there is a need for different quality formats, it may be best to try to use a system or framework supported capture+encoding that already meets the quality you require (regardless of whether you want to stream it or store it to file) as it leverages the HW and avoids extra transcoding on the client. There is a very good discussion on this on iOS at this blog post: http://blog.denivip.ru/index.php/2013/10/how-to-live-stream-video-as-you-shoot-it-in-ios/?lang=en. Take a look also at the note from the Dailymotion commenter at the end. – Mick Mar 21 '15 at 23:06
  • @Mick - really interesting article. There are also other variables, like latency, to take into account. The Dailymotion way is interesting and maintainable, but it takes at least 3s + the transport time of the first packet to start the live stream. – aergistal Mar 22 '15 at 10:28