My app currently uses AVFoundation to take raw camera data from the rear camera of an iPhone and display it on an AVCaptureVideoPreviewLayer in real time.
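For context, here's roughly how my preview is set up (trimmed down to the essentials; the class name and structure are just how I happen to have it organized):

```swift
import AVFoundation
import UIKit

final class PreviewViewController: UIViewController {
    private let session = AVCaptureSession()
    private var previewLayer: AVCaptureVideoPreviewLayer!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Use the rear (back) wide-angle camera as the input device.
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video,
                                                   position: .back),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else { return }
        session.addInput(input)

        // Display the raw camera feed in real time via the preview layer.
        previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer.videoGravity = .resizeAspectFill
        previewLayer.frame = view.bounds
        view.layer.addSublayer(previewLayer)

        session.startRunning()
    }
}
```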
My goal is to conditionally apply simple image filters to the preview layer. The images aren't saved, so I don't need to capture the output. For example, I'd like a toggle setting that converts the video coming in on the preview layer to black and white.
I found a question here that seems to accomplish something similar by capturing the individual video frames in a buffer, applying the desired transformations, and then displaying each frame as a UIImage. For several reasons, this seems like overkill for my project, and I'd like to avoid any performance issues it may cause.
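As I understand it, that approach boils down to something like the sketch below (my own rough version in Swift; the CIColorControls filter is just a stand-in for whatever transformation is applied, and the camera-input setup is elided since it matches my code above):

```swift
import AVFoundation
import CoreImage
import UIKit

final class FilteredPreviewController: UIViewController,
                                       AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let ciContext = CIContext()
    private let imageView = UIImageView()
    var monochromeEnabled = true   // the toggle I'm after

    override func viewDidLoad() {
        super.viewDidLoad()
        imageView.frame = view.bounds
        view.addSubview(imageView)

        // ...rear-camera input added to the session exactly as in my setup above...

        // Route every frame through a delegate instead of a preview layer.
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "video.frames"))
        if session.canAddOutput(output) { session.addOutput(output) }

        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        var image = CIImage(cvPixelBuffer: pixelBuffer)

        // Conditionally desaturate each frame before display.
        if monochromeEnabled {
            image = image.applyingFilter("CIColorControls",
                                         parameters: [kCIInputSaturationKey: 0])
        }

        // Render to a CGImage and hand it to the main thread as a UIImage.
        guard let cgImage = ciContext.createCGImage(image, from: image.extent) else { return }
        DispatchQueue.main.async {
            self.imageView.image = UIImage(cgImage: cgImage)
        }
    }
}
```

So every single frame makes a round trip through Core Image and UIKit just to be displayed, which is exactly the overhead I'm hoping to avoid.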
Is this the only way to accomplish my goal?
As I mentioned, I am not looking to capture any of the AVCaptureSession's video, merely to preview it.