
I want to render an animated NSView (or just the underlying CALayer) into a series of images without the view being presented on screen at all. I figured out how to do that with CARenderer and MTLTexture, but there are some issues with the approach below.

This runs in a playground and stores the output in an "Off-screen Render" folder in your Downloads directory:

import AppKit
import Metal
import QuartzCore
import PlaygroundSupport

// Keep the playground running so the timer keeps firing until the animation completes.
PlaygroundPage.current.needsIndefiniteExecution = true

let view = NSView(frame: CGRect(x: 0, y: 0, width: 600, height: 400))
let circle = NSView(frame: CGRect(x: 0, y: 0, width: 50, height: 50))

circle.wantsLayer = true
circle.layer?.backgroundColor = NSColor.red.cgColor
circle.layer?.cornerRadius = 25
view.wantsLayer = true
view.addSubview(circle)

let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm, width: 600, height: 400, mipmapped: false)
textureDescriptor.usage = [MTLTextureUsage.shaderRead, .shaderWrite, .renderTarget]

let device = MTLCreateSystemDefaultDevice()!
let texture: MTLTexture = device.makeTexture(descriptor: textureDescriptor)!
let context = CIContext(mtlDevice: device)
let renderer = CARenderer(mtlTexture: texture)

renderer.layer = view.layer
renderer.bounds = view.frame

let outputURL: URL = try! FileManager.default.url(for: .downloadsDirectory, in: .userDomainMask, appropriateFor: nil, create: false).appendingPathComponent("Off-screen Render")
try? FileManager.default.removeItem(at: outputURL)
try! FileManager.default.createDirectory(at: outputURL, withIntermediateDirectories: true, attributes: nil)

var frameNumber: Int = 0

func render() {
    Swift.print("Rendering frame #\(frameNumber)…")

    renderer.beginFrame(atTime: CACurrentMediaTime(), timeStamp: nil)
    renderer.addUpdate(renderer.bounds)
    renderer.render()
    renderer.endFrame()

    let ciImage: CIImage = CIImage(mtlTexture: texture)!
    let cgImage: CGImage = context.createCGImage(ciImage, from: ciImage.extent)!
    let url: URL = outputURL.appendingPathComponent("frame-\(frameNumber).png")
    let destination: CGImageDestination = CGImageDestinationCreateWithURL(url as CFURL, kUTTypePNG, 1, nil)!
    CGImageDestinationAddImage(destination, cgImage, nil)
    guard CGImageDestinationFinalize(destination) else { fatalError() }

    frameNumber += 1
}

var timer: Timer?

NSAnimationContext.runAnimationGroup({ context in
    context.duration = 0.25
    view.animator().frame.origin = CGPoint(x: 550, y: 350)
}, completionHandler: {
    timer?.invalidate()
    render()
    Swift.print("Finished off-screen rendering of \(frameNumber) frames in \(outputURL.path)…")
})

// Make the first render immediately after the animation starts; another render happens once it
// completes. For the purpose of this demo a timer is used instead of a display link.

render()
timer = Timer.scheduledTimer(withTimeInterval: 1 / 30, repeats: true, block: { _ in render() })

The problems with the above code, visible in the rendered frames, are:

  1. The texture doesn't get cleared and each next frame is drawn on top of the previous render. I'm aware that I can use replace(region:…), but I suspect it's not efficient compared to a render pass with a clear-color load action. Is this true? Can a render pass be used with CARenderer?

  2. The first frame (in the real project it's two or three frames) often comes out empty. I suspect this has to do with some asynchronous behaviour in CARenderer's rendering or during CGImage construction using Core Image. How can this be avoided? Is there some kind of wait-until-rendering-finished callback on the texture?


  • `CARenderer` is quite opaque about what it's doing with the texture and the texture's status. For issue 1, I recommend setting up a render pass descriptor targeting the texture with a load action to clear it, creating a render command encoder, immediately ending the encoder, and committing the command buffer. For issue 2, try creating a blit command encoder, using it to encode a synchronize-resource command for the texture, ending it, and committing it. – Ken Thomases May 15 '19 at 19:09
  • Ken, thanks for the input! It's almost working now. I don't see any difference with or without the blit and am guessing it's not needed, but the first frame always turns out empty. If I set up a timer with zero delay and render from the callback, the first frame comes out fine. I'm guessing it has something to do with CARenderer – would you know how to work around this? The updated code: https://gist.github.com/ianbytchek/7f4168df16b8bc170ef587344b6c1444 – Ian Bytchek May 19 '19 at 12:16
  • Also, I'm a total noob with Metal. Is recreating command buffers and encoders the right way of using them? Is there a more optimized approach that avoids recreating them within the render loop? – Ian Bytchek May 19 '19 at 12:25
  • 1
    Not sure why the first frame is still empty. Hopefully, somebody else will have an idea. As far as your use of Metal, yes, it's correct to create command buffers and encoders for each frame. – Ken Thomases May 19 '19 at 20:49
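A minimal sketch of the approach suggested in the comments above, reusing the device, texture, and renderer from the question (the command queue and the helper's name are new; the synchronize step only matters for managed-storage textures, which is the default on macOS):

let commandQueue = device.makeCommandQueue()!

func renderClearedFrame() {
    // 1. Clear the texture with an empty render pass whose load action is .clear.
    let passDescriptor = MTLRenderPassDescriptor()
    passDescriptor.colorAttachments[0].texture = texture
    passDescriptor.colorAttachments[0].loadAction = .clear
    passDescriptor.colorAttachments[0].storeAction = .store
    passDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)

    let clearBuffer = commandQueue.makeCommandBuffer()!
    clearBuffer.makeRenderCommandEncoder(descriptor: passDescriptor)!.endEncoding()
    clearBuffer.commit()
    clearBuffer.waitUntilCompleted()

    // 2. Let CARenderer draw the layer tree into the freshly cleared texture.
    renderer.beginFrame(atTime: CACurrentMediaTime(), timeStamp: nil)
    renderer.addUpdate(renderer.bounds)
    renderer.render()
    renderer.endFrame()

    // 3. Synchronize the managed texture so the CPU-side read through Core Image
    //    sees what the GPU produced.
    let syncBuffer = commandQueue.makeCommandBuffer()!
    let blit = syncBuffer.makeBlitCommandEncoder()!
    blit.synchronize(resource: texture)
    blit.endEncoding()
    syncBuffer.commit()
    syncBuffer.waitUntilCompleted()
}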

2 Answers


After speaking with Apple Developer Technical Support it appears that:

Core Image defers the rendering until the client requests access to the frame buffer, i.e. CVPixelBufferLockBaseAddress.

So, the solution is simply to call CVPixelBufferLockBaseAddress after calling CIContext.render, as shown below:

// Excerpt from an AVAssetWriter writing loop: pixelBufferAdaptor, input, frameImage,
// frameCount, frameRate, and context come from the surrounding writer setup (not shown).
for frameNumber in 0 ..< frameCount {
    var pixelBuffer: CVPixelBuffer?
    guard let pixelBufferPool: CVPixelBufferPool = pixelBufferAdaptor.pixelBufferPool else { preconditionFailure() }
    precondition(CVPixelBufferPoolCreatePixelBuffer(nil, pixelBufferPool, &pixelBuffer) == kCVReturnSuccess)

    let ciImage = CIImage(cgImage: frameImage)
    context.render(ciImage, to: pixelBuffer!)

    precondition(CVPixelBufferLockBaseAddress(pixelBuffer!, []) == kCVReturnSuccess)
    defer { precondition(CVPixelBufferUnlockBaseAddress(pixelBuffer!, []) == kCVReturnSuccess) }

    let bytes = UnsafeBufferPointer(start: CVPixelBufferGetBaseAddress(pixelBuffer!)!.assumingMemoryBound(to: UInt8.self), count: CVPixelBufferGetDataSize(pixelBuffer!))
    precondition(bytes.contains(where: { $0 != 0 }))

    while !input.isReadyForMoreMediaData { Thread.sleep(forTimeInterval: 10 / 1000) }
    precondition(pixelBufferAdaptor.append(pixelBuffer!, withPresentationTime: CMTime(seconds: Double(frameNumber) * frameRate, preferredTimescale: 600)))
}

P.S. This is the same answer as for the Making CIContext.render(CIImage, CVPixelBuffer) work with AVAssetWriter question – you might want to check it out for more insight into where and how this issue may occur while working with AVFoundation. Though the question is different, the solution is exactly the same.

  • Can you maybe elaborate where exactly to change the code you posted in your original question to make it work? There is no CVPixelBuffer involved, so I am not sure what change exactly would be necessary. – ePirat Jul 07 '22 at 21:38

I think you can use AVVideoCompositionCoreAnimationTool for rendering a view with animations.
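For completeness, a minimal sketch of how this could look, assuming asset is an AVAsset with a video track to composite over and animatedLayer is the layer hierarchy carrying the animations (variable names and the output URL are placeholders):

import AVFoundation
import QuartzCore

let videoLayer = CALayer()
let parentLayer = CALayer()
parentLayer.frame = CGRect(x: 0, y: 0, width: 600, height: 400)
videoLayer.frame = parentLayer.frame
parentLayer.addSublayer(videoLayer)
parentLayer.addSublayer(animatedLayer) // animations should begin at AVCoreAnimationBeginTimeAtZero

let composition = AVMutableVideoComposition(propertiesOf: asset)
composition.animationTool = AVVideoCompositionCoreAnimationTool(
    postProcessingAsVideoLayer: videoLayer, in: parentLayer)

let export = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetHighestQuality)!
export.videoComposition = composition
export.outputURL = outputMovieURL // placeholder destination
export.outputFileType = .mov
export.exportAsynchronously {
    // Individual frames can then be read back from the exported movie.
}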

  • This would be a good idea when you need to output the animation into an AVFoundation-supported video format, otherwise it's a very purpose-specific solution. The documentation describes it as `An object used to incorporate Core Animation into a video composition.` I'd imagine it still uses the `CARenderer` behind the scenes. – Ian Bytchek Sep 09 '20 at 14:44
  • But it also works on iOS, not only on macOS. From the resulting video you can extract any frame you want. – Sound Blaster Sep 10 '20 at 13:45
  • You sure can, but it would be overkill if you just need the render here and now. Besides, it would be hard to preserve the image quality without lossless output, which would be huge. Encoding with ProRes is probably a good choice, but again, the file size will be massive. Just saw that `CARenderer` is only available on macOS, but I think there was something else more fit for the job. I see you're from Petrozavodsk? :) Drop me a line? https://t.me/ianbytchek – Ian Bytchek Sep 10 '20 at 17:42
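As noted in the comments, individual frames can be pulled back out of the exported movie. A minimal sketch using AVAssetImageGenerator, assuming exportedMovieURL points at the file produced by the export:

import AVFoundation

let movie = AVAsset(url: exportedMovieURL)
let generator = AVAssetImageGenerator(asset: movie)
generator.requestedTimeToleranceBefore = .zero // ask for the exact frame, not a nearby keyframe
generator.requestedTimeToleranceAfter = .zero

let time = CMTime(seconds: 0.1, preferredTimescale: 600)
let frame: CGImage = try generator.copyCGImage(at: time, actualTime: nil)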