This question is different from iOS Xcode Message from debugger: Terminated due to memory issue: I am using a different device, my app is being killed in the foreground, and I cannot use Instruments to inspect allocations.
I am trying to merge short intervals of many AVAssets into one video file. I also need to apply additional filters and transformations to them.
I implemented classes that can take one asset and do everything exactly as I want. But when I try to do the same thing with many shorter assets (around 7 assets are still fine; their total duration can even be shorter than that of the single asset), the application crashes and all I get is the "Message from debugger: Terminated due to memory issue" log.
I cannot even use most of the Instruments tools, because the application crashes immediately when launched with them. I have tried many things to solve this without success, and I would really appreciate some help.
Thank you
The relevant code snippets are below.
Creation of the composition:
func export(toURL url: URL, callback: @escaping (_ url: URL?) -> Void) {
    var lastTime = kCMTimeZero
    var instructions: [VideoFilterCompositionInstruction] = []

    let composition = AVMutableComposition()
    composition.naturalSize = CGSize(width: 1080, height: 1920)

    for (index, assetURL) in assets.enumerated() {
        let asset: AVURLAsset? = AVURLAsset(url: assetURL)

        guard let track: AVAssetTrack = asset!.tracks(withMediaType: AVMediaType.video).first else { callback(nil); return }

        // The interval of this asset to keep, in seconds
        let range = CMTimeRange(start: CMTime(seconds: ranges[index].lowerBound, preferredTimescale: 1000),
                                end: CMTime(seconds: ranges[index].upperBound, preferredTimescale: 1000))

        let videoTrack = composition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)!
        let audioTrack = composition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: kCMPersistentTrackID_Invalid)!

        do { try videoTrack.insertTimeRange(range, of: track, at: lastTime) }
        catch { callback(nil); return }

        if let audio = asset!.tracks(withMediaType: AVMediaType.audio).first {
            do { try audioTrack.insertTimeRange(range, of: audio, at: lastTime) }
            catch { callback(nil); return }
        }

        let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
        layerInstruction.trackID = videoTrack.trackID

        // Custom instruction carrying the filters for this segment
        let instruction = VideoFilterCompositionInstruction(trackID: videoTrack.trackID,
                                                            filters: self.filters,
                                                            context: self.context,
                                                            preferredTransform: track.preferredTransform,
                                                            rotate: false)
        instruction.timeRange = CMTimeRange(start: lastTime, duration: range.duration)
        instruction.layerInstructions = [layerInstruction]
        instructions.append(instruction)

        lastTime = lastTime + range.duration
    }

    let videoComposition = AVMutableVideoComposition()
    videoComposition.customVideoCompositorClass = VideoFilterCompositor.self
    videoComposition.frameDuration = CMTimeMake(1, 30)
    videoComposition.renderSize = CGSize(width: 1080, height: 1920)
    videoComposition.instructions = instructions

    let session: AVAssetExportSession = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality)!
    session.videoComposition = videoComposition
    session.outputURL = url
    session.outputFileType = AVFileType.mp4

    session.exportAsynchronously {
        DispatchQueue.main.async {
            callback(url)
        }
    }
}
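For context, the snippets rely on a few declarations that are not shown. They look roughly like this (a simplified sketch: the class name VideoExporter and the ClosedRange<Double> type are stand-ins, but the property names match the code above):

import AVFoundation
import CoreImage

// Simplified sketch of the state used by export(toURL:callback:).
// The class name and concrete types are illustrative; only the
// property names are taken from the code above.
class VideoExporter {
    let assets: [URL]                  // source video files, one per segment
    let ranges: [ClosedRange<Double>]  // interval in seconds to keep from each asset
    let filters: [CIFilter]            // Core Image filters applied to every frame
    let context: CIContext             // shared Core Image rendering context

    init(assets: [URL], ranges: [ClosedRange<Double>],
         filters: [CIFilter], context: CIContext) {
        self.assets = assets
        self.ranges = ranges
        self.filters = filters
        self.context = context
    }
}

// Custom instruction that carries per-segment state to the compositor
class VideoFilterCompositionInstruction: AVMutableVideoCompositionInstruction {
    let trackID: CMPersistentTrackID
    let filters: [CIFilter]
    let context: CIContext
    let preferredTransform: CGAffineTransform
    let rotate: Bool

    init(trackID: CMPersistentTrackID, filters: [CIFilter], context: CIContext,
         preferredTransform: CGAffineTransform, rotate: Bool) {
        self.trackID = trackID
        self.filters = filters
        self.context = context
        self.preferredTransform = preferredTransform
        self.rotate = rotate
        super.init()
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }
}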
and part of the AVVideoCompositing class:
func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
    autoreleasepool {
        self.getDispatchQueue().sync {
            guard let instruction = request.videoCompositionInstruction as? VideoFilterCompositionInstruction else {
                request.finish(with: NSError(domain: "jojodmo.com", code: 760, userInfo: nil))
                return
            }

            guard let pixels = request.sourceFrame(byTrackID: instruction.trackID) else {
                request.finish(with: NSError(domain: "jojodmo.com", code: 761, userInfo: nil))
                return
            }

            // Run the filter chain over the source frame
            var image: CIImage? = CIImage(cvPixelBuffer: pixels)
            for filter in instruction.filters {
                filter.setValue(image, forKey: kCIInputImageKey)
                image = filter.outputImage ?? image
            }

            // Render the filtered image into a new pixel buffer;
            // fall back to the unfiltered source frame if allocation fails
            let newBuffer: CVPixelBuffer? = self.renderContext.newPixelBuffer()
            if let buffer = newBuffer {
                instruction.context.render(image!, to: buffer)
                request.finish(withComposedVideoFrame: buffer)
            } else {
                request.finish(withComposedVideoFrame: pixels)
            }
        }
    }
}
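The rest of the VideoFilterCompositor class is essentially boilerplate and looks roughly like this (simplified sketch; the queue label and the BGRA pixel-format attributes are stand-ins):

// Rough sketch of the rest of the compositor. The queue label and the
// pixel-format attributes are illustrative; startRequest(_:) is shown above.
class VideoFilterCompositor: NSObject, AVVideoCompositing {
    var renderContext = AVVideoCompositionRenderContext()
    private let queue = DispatchQueue(label: "com.jojodmo.videoFilterCompositor")

    // Both source and output frames are BGRA pixel buffers
    var sourcePixelBufferAttributes: [String : Any]? =
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    var requiredPixelBufferAttributesForRenderContext: [String : Any] =
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {
        getDispatchQueue().sync {
            self.renderContext = newRenderContext
        }
    }

    func getDispatchQueue() -> DispatchQueue {
        return queue
    }

    // startRequest(_:) is implemented as shown above
}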