I'm struggling with this, too. The ffmpeg library seems to work for compression, but depending on how you build and link it, its LGPL/GPL licensing can require you to release your source code (static linking and the GPL-only components are the usual sticking points).
You can set your object as the sample buffer delegate of an AVCaptureVideoDataOutput and implement this callback, which is invoked on the dispatch queue you supply:

```objc
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection;
```
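Here's a minimal sketch of the wiring, assuming this runs inside a class of yours that conforms to AVCaptureVideoDataOutputSampleBufferDelegate (device selection and error handling are stripped to the essentials):

```objc
#import <AVFoundation/AVFoundation.h>

// Create the session and hook up the camera.
AVCaptureSession *session = [[AVCaptureSession alloc] init];
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
if (input && [session canAddInput:input]) [session addInput:input];

// Uncompressed frames come out of AVCaptureVideoDataOutput; BGRA is the
// easiest pixel format to turn into a UIImage/JPEG later.
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                              @(kCVPixelFormatType_32BGRA) };
output.alwaysDiscardsLateVideoFrames = YES; // drop late frames rather than queue them

// The delegate callback fires on this serial queue.
dispatch_queue_t queue = dispatch_queue_create("videoQueue", DISPATCH_QUEUE_SERIAL);
[output setSampleBufferDelegate:self queue:queue];
if ([session canAddOutput:output]) [session addOutput:output];

[session startRunning];
```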
Then you'll get the uncompressed frames, which you can convert into a UIImage or JPEG (Apple has sample code for this), but there's no way to get at the hardware-compressed H.264 frames, which is what we really want. This is where you could bring in a library like ffmpeg to compress the video to H.264 (or whatever) in software.
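For reference, the conversion Apple's sample code does (see Technical Q&A QA1702) looks roughly like this inside the callback; it assumes the 32BGRA format configured above:

```objc
// Convert the uncompressed frame buffer into a UIImage, then a JPEG.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void *baseAddress  = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width       = CVPixelBufferGetWidth(imageBuffer);
    size_t height      = CVPixelBufferGetHeight(imageBuffer);

    // Wrap the BGRA pixels in a bitmap context and snapshot it.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace,
        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);

    UIImage *image = [UIImage imageWithCGImage:cgImage];
    NSData *jpeg = UIImageJPEGRepresentation(image, 0.8);
    // ... hand `jpeg` to your software encoder or stream it as-is ...

    CGImageRelease(cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
```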
Currently, I'm trying to see if I can read AVAssetWriter's file output as it's being written and redirect it to a stream (AVAssetWriter can write hardware-compressed video), but Apple seems to be making this hard; I suspect part of the problem is that the container isn't a valid movie until the writer finalizes the file.
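For what it's worth, the rough shape of what I'm attempting is below. It's untested, and sendBytesToStream: is a hypothetical stand-in for your own networking code:

```objc
// Sketch of the AVAssetWriter-plus-tail-the-file experiment: hardware
// H.264 encoding goes to disk via AVAssetWriter while an NSFileHandle
// polls the growing file for new bytes.
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"capture.mp4"];
NSError *error = nil;
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:[NSURL fileURLWithPath:path]
                                                 fileType:AVFileTypeMPEG4
                                                    error:&error];
AVAssetWriterInput *videoInput = [AVAssetWriterInput
    assetWriterInputWithMediaType:AVMediaTypeVideo
                   outputSettings:@{ AVVideoCodecKey  : AVVideoCodecH264,
                                     AVVideoWidthKey  : @640,
                                     AVVideoHeightKey : @480 }];
videoInput.expectsMediaDataInRealTime = YES;
[writer addInput:videoInput];
// ... start the writer and feed it the sample buffers from the
// AVCaptureVideoDataOutput callback via -appendSampleBuffer: ...

// Meanwhile, on a timer or background queue, tail the file:
NSFileHandle *reader = [NSFileHandle fileHandleForReadingAtPath:path];
NSData *chunk = [reader readDataToEndOfFile]; // whatever has been flushed so far
if (chunk.length > 0) {
    // [self sendBytesToStream:chunk]; // hypothetical networking call
}
```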
Let me know if you find something that works.