2

I create a video from images, but the video loses some frames because isReadyForMoreMediaData is sometimes false. When I debugged it, I saw the cause is the loop: the writer needs some delay before it can accept the next buffer, but I don't know how to wait for that.

{
           for nextDicData in self.selectedPhotosArray {
                if (videoWriterInput.isReadyForMoreMediaData) {

                    if let nextImage = nextDicData["img"] as? UIImage
                    {
                        var frameDuration = CMTimeMake(Int64(0), fps)
                        if let timeVl = nextDicData["time"] as? Float{
                               framePerSecond = Int64(timeVl * 1000)
                            print("TIME FRAME : \(timeVl)")

                        }else{
                             framePerSecond = Int64(0.1 * 1000)
                        }

                        frameDuration = CMTimeMake(framePerSecond, fps)
                        let lastFrameTime = CMTimeMake(Int64(lastTimeVl), fps)
                        let presentationTime = CMTimeAdd(lastFrameTime, frameDuration)
                        var pixelBuffer: CVPixelBuffer? = nil
                        let status: CVReturn = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferAdaptor.pixelBufferPool!, &pixelBuffer)
                        if let pixelBuffer = pixelBuffer, status == 0 {
                            let managedPixelBuffer = pixelBuffer
                            CVPixelBufferLockBaseAddress(managedPixelBuffer, CVPixelBufferLockFlags(rawValue: CVOptionFlags(0)))
                            let data = CVPixelBufferGetBaseAddress(managedPixelBuffer)
                            let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
                            let context = CGContext(data: data, width: Int(self.outputSize.width), height: Int(self.outputSize.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(managedPixelBuffer), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)
                            context!.clear(CGRect(x: 0, y: 0, width: CGFloat(self.outputSize.width), height: CGFloat(self.outputSize.height)))
                            let horizontalRatio = CGFloat(self.outputSize.width) / nextImage.size.width
                            let verticalRatio = CGFloat(self.outputSize.height) / nextImage.size.height
                            //let aspectRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
                            let aspectRatio = min(horizontalRatio, verticalRatio) // ScaleAspectFit
                            // Apply the aspect ratio so the image is actually scaled to fit
                            let newSize = CGSize(width: nextImage.size.width * aspectRatio, height: nextImage.size.height * aspectRatio)
                            let x = newSize.width < self.outputSize.width ? (self.outputSize.width - newSize.width) / 2 : 0
                            let y = newSize.height < self.outputSize.height ? (self.outputSize.height - newSize.height) / 2 : 0

                            // Draw at (x, y) so the scaled image is centered in the output frame
                            context?.draw(nextImage.cgImage!, in: CGRect(x: x, y: y, width: newSize.width, height: newSize.height))
                            CVPixelBufferUnlockBaseAddress(managedPixelBuffer, CVPixelBufferLockFlags(rawValue: CVOptionFlags(0)))
                            appendSucceeded = pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)


                        } else {
                            print("Failed to allocate pixel buffer")
                            appendSucceeded = false
                        }
                    }

                }else{
                    //not ready
                       print("writer is not ready: \(lastTimeVl)")
                }
                if !appendSucceeded {
                    break
                }
                frameCount += 1
                lastTimeVl += framePerSecond
                print("LAST TIME : \(lastTimeVl)")


            }
Marino

2 Answers

2

The AVAssetWriterInput can manage this for you and tell you when isReadyForMoreMediaData becomes true again: call requestMediaDataWhenReady(on:using:).

Here is an example from Apple's documentation (translated to Swift):

myAVAssetWriterInput.requestMediaDataWhenReady(on: queue) {
    while myAVAssetWriterInput.isReadyForMoreMediaData {
        let nextSampleBuffer = copyNextSampleBufferToWrite()
        if let nextSampleBuffer = nextSampleBuffer { 
            // you have another frame to add
            myAVAssetWriterInput.append(nextSampleBuffer)
        } else {
            // finished adding frames
            myAVAssetWriterInput.markAsFinished()
            break
        }
    }
}

Now, when the writer suddenly becomes not ready, no worries: it will continue adding frames on the next requestMediaDataWhenReady callback.
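Adapted to the pixel-buffer loop in your question, the same pattern might look roughly like this. This is only a sketch: `videoWriter`, `videoWriterInput`, `pixelBufferAdaptor`, `fps`, and `selectedPhotosArray` are assumed to come from your existing setup, and `makePixelBuffer(from:)` is a hypothetical helper wrapping the CGContext drawing code you already have.

```swift
import AVFoundation
import UIKit

let mediaQueue = DispatchQueue(label: "mediaInputQueue")
var frameIndex = 0
var lastTimeVl: Int64 = 0

videoWriterInput.requestMediaDataWhenReady(on: mediaQueue) {
    // Append as many frames as the writer will take right now.
    while videoWriterInput.isReadyForMoreMediaData {
        guard frameIndex < self.selectedPhotosArray.count else {
            // No more images: finish the input and end the writer session.
            videoWriterInput.markAsFinished()
            videoWriter.finishWriting {
                print("finished writing video")
            }
            return
        }
        let dicData = self.selectedPhotosArray[frameIndex]
        if let image = dicData["img"] as? UIImage,
           let buffer = makePixelBuffer(from: image) { // hypothetical helper
            // Per-frame duration in milliseconds, defaulting to 0.1 s as in the question.
            let durationMs = Int64(((dicData["time"] as? Float) ?? 0.1) * 1000)
            let presentationTime = CMTimeMake(lastTimeVl, fps)
            pixelBufferAdaptor.append(buffer, withPresentationTime: presentationTime)
            lastTimeVl += durationMs
        }
        frameIndex += 1
    }
    // If the writer became not ready mid-loop, this closure simply returns;
    // AVFoundation calls it again once the input can accept more data.
}
```

Because the closure is re-invoked whenever the input is ready again, no frames are dropped and no manual sleep is needed.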

Developeder
-1

Swift 5

Add usleep after pixelBufferAdaptor.append.

The reason for adding a sleep is that when there are multiple inputs, AVAssetWriter tries to write media data in an interleaving pattern, so the writer must be ready for the next input (an image, in your case) before you append data. Waiting a short time makes it ready for the next input.

 appendSucceeded = pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)
 // videoWriterInput must be paused for at least 50 milliseconds, or the buffer won't be ready to append the next frame
 usleep(useconds_t(50000))
farazBhatti