
I'm getting frames from my camera in the following way:

```swift
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let imageBuffer: CVImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
}
```

From the Apple documentation:

> If you need to reference the CMSampleBuffer object outside of the scope of this method, you must CFRetain it and then CFRelease it when you are finished with it. To maintain optimal performance, some sample buffers directly reference pools of memory that may need to be reused by the device system and other capture inputs. This is frequently the case for uncompressed device native capture where memory blocks are copied as little as possible. If multiple sample buffers reference such pools of memory for too long, inputs will no longer be able to copy new samples into memory and those samples will be dropped.

Is it okay to hold a reference to the CVImageBuffer without explicitly setting sampleBuffer = nil? I only ask because recent versions of Swift automatically manage the memory of Core Foundation objects, so CFRetain and CFRelease are not available.

Also, what is the reasoning behind "This is frequently the case for uncompressed device native capture where memory blocks are copied as little as possible." ? Why would a memory block be copied in the first place?

Carpetfizz

1 Answer


> Is it okay to hold a reference to CVImageBuffer without explicitly setting sampleBuffer = nil?

If you're going to keep a reference to the image buffer, then keeping a reference to its "containing" CMSampleBuffer definitely cannot hurt. Will the "right thing" be done if you keep a reference to the CVImageBuffer but not the CMSampleBuffer? Maybe.
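To make "keeping a reference to its containing CMSampleBuffer" concrete, here is a minimal sketch of a delegate that holds the whole sample buffer rather than just the image buffer. The class name `FrameHolder` and the `finishedWithFrame()` method are illustrative, not from the original post; ARC does the CFRetain/CFRelease for you when the property is set and cleared.

```swift
import AVFoundation

final class FrameHolder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    // Hold the CMSampleBuffer itself, not only the CVImageBuffer it contains.
    // ARC retains it while this property is non-nil.
    private var latestSampleBuffer: CMSampleBuffer?

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Keeping the whole sample buffer guarantees the image buffer
        // inside it stays valid for as long as we reference it.
        latestSampleBuffer = sampleBuffer
    }

    func finishedWithFrame() {
        // Setting the reference to nil releases the buffer and returns
        // its memory to the capture pool.
        latestSampleBuffer = nil
    }
}
```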

> Also, what is the reasoning behind "This is frequently the case for uncompressed device native capture where memory blocks are copied as little as possible." ? Why would a memory block be copied in the first place?

There are questions on SO about how to do a deep copy on an image CMSampleBuffer, and the answers are not straightforward, so the chances of unintentionally copying one's memory block are very low. I think the intention of this documentation is to inform you that AVCaptureVideoDataOutput is efficient! and that this efficiency (via fixed size frame pools) can have the surprising side effect of dropped frames if you hang onto too many CMSampleBuffers for too long, so don't do that.

The warning is slightly redundant however, because even without the spectre of dropped frames, uncompressed video CMSampleBuffers are already a VERY hot potato due to their size and frequency. You only need to reference a few seconds' worth to use up gigabytes of RAM, so it is imperative to process them as quickly as possible and then release/nil any references to them.
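If you do need a frame's contents beyond the delegate callback, one way to drop the hot potato quickly is to copy the pixel data out and release the buffer immediately. A sketch, assuming a single-plane pixel format such as kCVPixelFormatType_32BGRA (planar formats like 420v need per-plane handling):

```swift
import CoreVideo

// Copies the pixel data out of a CVPixelBuffer into an independent Data
// value, so the original sample buffer can be released right away.
// Assumes a single-plane format; the buffer must be locked while its
// base address is read.
func copyPixelData(from pixelBuffer: CVPixelBuffer) -> Data? {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return nil }
    let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    // Data(bytes:count:) makes its own copy, independent of the pool memory.
    return Data(bytes: base, count: bytesPerRow * height)
}
```

Note the copy itself costs time and memory, which is why the capture pipeline avoids it by default; only pay that price when you genuinely must outlive the buffer.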

Rhythmic Fistman
  • Thanks. How do I decide if I should get rid of the sample buffer? – Carpetfizz Mar 30 '18 at 18:45
  • What exactly are you doing with them? – Rhythmic Fistman Mar 30 '18 at 19:41
  • I would like to send the captured image over a network while doing as little work as possible on the device. So, I thought, instead of converting it to a CIImage on the device itself, I just send the buffer over the network and decode it on the other end (a Mac app) – Carpetfizz Mar 30 '18 at 19:43
  • Just one frame or a stream of them? They're pretty big, even if it's only one you should compress them. It's a bit of a rabbit hole. If it's a stream, look at the comments in this answer - the asker is using `webRTC` plus `apprtc-ios` to do a video call type app – Rhythmic Fistman Mar 31 '18 at 09:05
  • A stream of them – Carpetfizz Mar 31 '18 at 09:08