32

I am using AVCaptureSession to capture video and get real-time frames from the iPhone camera, but how can I send them to a server with the frames and sound multiplexed together, and how do I use ffmpeg to complete this task? If anyone has a tutorial about ffmpeg or any example, please share it here.

  • I am assuming you are already recording MOV/MP4 'chunk' files. You can stream these using an old project I wrote, [ffstream](https://github.com/otmakie/LivuLib) (forked by someone). Keep in mind you must maintain a monotonically increasing timeline; in other words, your start time for each MOV/MP4 must be the same. That way all 1+N files will have 'blank' space at the beginning, and the timestamps generated by mov.c in ffmpeg will be correct. You can use RTSP/RTP over UDP/TCP, or use the librtmp plugin for ffmpeg, for streaming. – Steve McFarlin Jan 08 '13 at 19:41
  • @SteveMcFarlin Can you help me with how to send the stream from local to server, that is, how to send the stream as chunks? – Rahul Juyal May 08 '13 at 10:37

3 Answers

27

The way I'm doing it is to implement an AVCaptureSession, which has a delegate with a callback that's run on every frame. That callback sends each frame over the network to the server, which has a custom setup to receive it.

Here's the flow:

http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/03_MediaCapture.html#//apple_ref/doc/uid/TP40010188-CH5-SW2

And here's some code:

// make input device
NSError *deviceError;
AVCaptureDevice *cameraDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *inputDevice = [AVCaptureDeviceInput deviceInputWithDevice:cameraDevice error:&deviceError];

// make output device
AVCaptureVideoDataOutput *outputDevice = [[AVCaptureVideoDataOutput alloc] init];
[outputDevice setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

// initialize capture session
AVCaptureSession *captureSession = [[[AVCaptureSession alloc] init] autorelease];
[captureSession addInput:inputDevice];
[captureSession addOutput:outputDevice];

// make preview layer and add so that camera's view is displayed on screen
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
previewLayer.frame = view.bounds;
[view.layer addSublayer:previewLayer];

// go!
[captureSession startRunning];

Then the output device's delegate (here, self) has to implement the callback:

-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CGSize imageSize = CVImageBufferGetEncodedSize(imageBuffer);
    // also in the 'mediaSpecific' dict of the sampleBuffer
    NSLog(@"frame captured at %.fx%.f", imageSize.width, imageSize.height);
}
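If you do want to experiment with pushing raw frames from that callback (the paragraph below explains why this does not scale), a minimal sketch of pulling the pixel bytes out of the sample buffer could look like this. It assumes the output was configured for a packed format such as BGRA; planar YUV formats would need CVPixelBufferGetBaseAddressOfPlane instead:

// inside captureOutput:didOutputSampleBuffer:fromConnection:
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

void  *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t height      = CVPixelBufferGetHeight(pixelBuffer);

// copy the raw pixels so the buffer can be unlocked before you send anything
NSData *frameData = [NSData dataWithBytes:baseAddress length:bytesPerRow * height];

CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

// hand frameData to whatever transport you use (socket, HTTP POST, ...)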

Sending raw frames or individual images will never work well enough for you (because of the amount of data and the number of frames). Nor can you reasonably serve anything from the phone (WWAN networks have all sorts of firewalls). You'll need to encode the video and stream it to a server, most likely over a standard streaming protocol (RTSP, RTMP). There is an H.264 encoder chip on the iPhone 3GS and later. The problem is that it is not stream oriented: it outputs the metadata required to parse the video last. This leaves you with a few options.

1) Get the raw data and use FFmpeg to encode on the phone (will use a ton of CPU and battery).

2) Write your own parser for the H.264/AAC output (very hard).

3) Record and process in chunks (this will add latency equal to the length of the chunks, and drop around 1/4 second of video between chunks as you stop and restart the sessions); a minimal sketch of this approach follows the list.
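For option 3, one chunk written with AVAssetWriter might look roughly like the sketch below. chunkURL, the dimensions and sessionStartTime are assumptions, and the chunk-rotation and audio-input logic are left out; this only shows the moving parts, not a complete implementation.

NSError *writerError = nil;
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:chunkURL      // e.g. a file in NSTemporaryDirectory()
                                                 fileType:AVFileTypeMPEG4
                                                    error:&writerError];

NSDictionary *videoSettings = @{ AVVideoCodecKey  : AVVideoCodecH264,   // hardware H.264 encoder
                                 AVVideoWidthKey  : @640,
                                 AVVideoHeightKey : @480 };

AVAssetWriterInput *videoInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:videoSettings];
videoInput.expectsMediaDataInRealTime = YES;
[writer addInput:videoInput];

[writer startWriting];
// Use the same CMTime start for every chunk so timestamps stay monotonic
// across files (see the comment on the question about chunked MOV/MP4 files).
[writer startSessionAtSourceTime:sessionStartTime];

// In captureOutput:didOutputSampleBuffer:fromConnection: append each frame:
//     if (videoInput.isReadyForMoreMediaData)
//         [videoInput appendSampleBuffer:sampleBuffer];

// When the chunk is long enough, close it and upload the file:
[videoInput markAsFinished];
[writer finishWritingWithCompletionHandler:^{
    // upload chunkURL, then start the next writer
}];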

  • Up to that point I have already done this, thanks for your response. My question is: after that, how do I send the audio and video frames from the iPhone to the server? – Rahul Juyal Sep 13 '12 at 11:45
  • I understand this is old, but I am stuck on the server side of this very topic. How did you configure your server to handle the stream of image frames? – Siriss Jul 19 '13 at 00:04
  • The developer.apple.com link now returns Page Not Found. – AechoLiu Jul 31 '17 at 06:12
4

Look here, and here.

Try capturing video using AV Foundation framework. Upload it to your server with HTTP streaming.
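For the upload side, a bare-bones sketch of POSTing a recorded chunk file over HTTP might look like this; the server URL is a placeholder and a real setup would more likely use HLS segments or a proper streaming protocol.

// Assumes chunkURL points at a finished MP4/MOV chunk on disk and that
// http://example.com/upload is a hypothetical endpoint on your server.
NSMutableURLRequest *request =
    [NSMutableURLRequest requestWithURL:[NSURL URLWithString:@"http://example.com/upload"]];
request.HTTPMethod = @"POST";
[request setValue:@"video/mp4" forHTTPHeaderField:@"Content-Type"];

NSURLSessionUploadTask *task =
    [[NSURLSession sharedSession] uploadTaskWithRequest:request
                                               fromFile:chunkURL
                                      completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        if (error) {
            NSLog(@"chunk upload failed: %@", error);
        }
    }];
[task resume];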

Also check out another Stack Overflow post below.

(The post below was found at this link here)

You most likely already know....

 1) How to get compressed frames and audio from iPhone's camera?

You cannot do this. The AVFoundation API has prevented this from every angle. I even tried named pipes and some other sneaky Unix foo; no such luck. You have no choice but to write to a file. In your linked post a user suggests setting up the callback to deliver encoded frames. As far as I am aware, this is not possible for H.264 streams. The capture delegate will deliver images encoded in a specific pixel format; it is the movie writers and AVAssetWriter that do the encoding.
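For what it's worth, the pixel format those delivered images arrive in is whatever you set on the video data output; BGRA below is just one common choice, and videoOutput stands for your AVCaptureVideoDataOutput:

// Ask the capture output for uncompressed 32-bit BGRA frames.
// (Biplanar YUV such as kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
// is also common and lighter on memory bandwidth.)
videoOutput.videoSettings =
    @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };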

 2) Is encoding uncompressed frames with ffmpeg's API fast enough for real-time streaming?

Yes, it is. However, you will have to use libx264, which gets you into GPL territory. That is not exactly compatible with the App Store.

I would suggest using AVFoundation and AVAssetWriter for efficiency reasons.

4

There is a long and a short story to it.

This is the short one: go look at https://github.com/OpenWatch/H264-RTSP-Server-iOS

It is a starting point. You can grab it and see how it extracts the frames; it is a small and simple project.

Then you can look at Kickflip, which has a specific callback, "encodedFrame"; it is called back once an encoded frame arrives, and from that point you can do what you want with it, e.g. send it via WebSocket. There is also a bunch of very hard code available for reading MPEG atoms.
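As a rough illustration of that last idea (this is not Kickflip's actual API; the delegate method name, the outputStream property and the lack of framing are all assumptions), forwarding each encoded frame over an already-open socket could look like this:

// Hypothetical callback, loosely modeled on the idea of an "encodedFrame"
// hook: forward each encoded frame buffer to a server.
- (void)encoder:(id)encoder didProduceEncodedFrame:(NSData *)frameData
{
    // self.outputStream is an NSOutputStream already opened to the server
    // (e.g. via +[NSStream getStreamsToHostWithName:port:inputStream:outputStream:]).
    const uint8_t *bytes = frameData.bytes;
    NSUInteger remaining = frameData.length;
    while (remaining > 0 && self.outputStream.hasSpaceAvailable) {
        NSInteger written = [self.outputStream write:bytes maxLength:remaining];
        if (written <= 0) break;   // handle the error / retry in real code
        bytes += written;
        remaining -= written;
    }
}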