
I modified some of the code in the time-lapse builder class from this link https://github.com/justinlevi/imagesToVideo to take in an array of images instead of URLs.

Everything seems to be working fine as far as taking the images and creating a video; however, I'm having some issues with the way the images are being displayed in the video.

This is how the images really look, and underneath, how they come out in the video on the iPhone:

[images: the original photos vs. frames from the generated video]

I feel like the issue is in the fillPixelBufferFromImage method:

    func fillPixelBufferFromImage(image: UIImage, pixelBuffer: CVPixelBuffer, contentMode: UIViewContentMode) {

        CVPixelBufferLockBaseAddress(pixelBuffer, 0)

        let data = CVPixelBufferGetBaseAddress(pixelBuffer)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGBitmapContextCreate(data, Int(self.outputSize.width), Int(self.outputSize.height), 8, CVPixelBufferGetBytesPerRow(pixelBuffer), rgbColorSpace, CGImageAlphaInfo.PremultipliedFirst.rawValue)

        CGContextClearRect(context, CGRectMake(0, 0, CGFloat(self.outputSize.width), CGFloat(self.outputSize.height)))

        // Pick a scale factor that fits or fills the output frame.
        let horizontalRatio = CGFloat(self.outputSize.width) / image.size.width
        let verticalRatio = CGFloat(self.outputSize.height) / image.size.height
        var ratio: CGFloat = 1

        switch contentMode {
        case .ScaleAspectFill:
            ratio = max(horizontalRatio, verticalRatio)
        case .ScaleAspectFit:
            ratio = min(horizontalRatio, verticalRatio)
        default:
            ratio = min(horizontalRatio, verticalRatio)
        }

        let newSize: CGSize = CGSizeMake(image.size.width * ratio, image.size.height * ratio)

        // Center the scaled image in the output frame.
        let x = newSize.width < self.outputSize.width ? (self.outputSize.width - newSize.width) / 2 : 0
        let y = newSize.height < self.outputSize.height ? (self.outputSize.height - newSize.height) / 2 : 0

        CGContextDrawImage(context, CGRectMake(x, y, newSize.width, newSize.height), image.CGImage)

        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0)
    }
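As a sanity check on the fit/fill math in that method, the scaling logic can be pulled out into a standalone function and run outside UIKit. This is a minimal plain-Swift sketch with my own stand-in types (`Size` for `CGSize`, `ContentMode` for the two `UIViewContentMode` cases used):

```swift
// Stand-ins for CGSize and the relevant UIViewContentMode cases,
// so this runs without UIKit.
struct Size { var width: Double; var height: Double }
enum ContentMode { case scaleAspectFit, scaleAspectFill }

/// Returns the rect an image of `imageSize` should be drawn into so it is
/// scaled by the fit/fill rule and centered within `outputSize`.
func drawRect(imageSize: Size, outputSize: Size, contentMode: ContentMode)
    -> (x: Double, y: Double, width: Double, height: Double) {
    let horizontalRatio = outputSize.width / imageSize.width
    let verticalRatio = outputSize.height / imageSize.height
    let ratio: Double
    switch contentMode {
    case .scaleAspectFill: ratio = max(horizontalRatio, verticalRatio)
    case .scaleAspectFit:  ratio = min(horizontalRatio, verticalRatio)
    }
    let newWidth = imageSize.width * ratio
    let newHeight = imageSize.height * ratio
    // Center on an axis only when the scaled image is smaller than the frame.
    let x = newWidth < outputSize.width ? (outputSize.width - newWidth) / 2 : 0
    let y = newHeight < outputSize.height ? (outputSize.height - newHeight) / 2 : 0
    return (x, y, newWidth, newHeight)
}
```

For a 1080×1920 portrait image in a 1920×1080 frame, `.scaleAspectFit` gives a 607.5×1080 rect at x = 656.25, y = 0 — a centered, pillarboxed frame — so the geometry itself looks correct, which points the blame elsewhere.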

Please advise. Thank you

EDIT: Can someone please help? I'm extremely stuck on getting these images to show in the video the way they looked when they were taken. Thank you for your help.

  • Every UIImage has an orientation, so you should handle their orientation properly before creating the video. Check this [SO](http://stackoverflow.com/questions/8915630/ios-uiimageview-how-to-handle-uiimage-image-orientation) – Inhan Feb 29 '16 at 05:37
  • Do you think that flipping it will do the job? Thank you for the help – Walking Feb 29 '16 at 11:19
  • Yes, maybe. And using `UIScreen.mainScreen().scale`, you can get the right scale for the screen; it can affect your video's size. – Inhan Feb 29 '16 at 13:07
  • Do you have any advice on getting the image's orientation in order to properly flip it? Thanks man – Walking Mar 01 '16 at 00:43
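On the orientation point in the comments: `image.CGImage` hands `CGContextDrawImage` the raw pixel data and discards `image.imageOrientation`, so portrait iPhone photos (stored with the `.Right` flag) render sideways. The usual fix is to redraw each `UIImage` upright (for example via the approach in the linked SO question) before filling the pixel buffer. As a plain-Swift sketch of what each flag implies — my own enum mirroring `UIImageOrientation`, with the mirrored cases following a rotate-then-flip convention that should be double-checked against the UIKit docs:

```swift
// Stand-in for UIImageOrientation (UIKit not imported here).
enum Orientation {
    case Up, Down, Left, Right
    case UpMirrored, DownMirrored, LeftMirrored, RightMirrored
}

/// Degrees of clockwise rotation (plus whether a horizontal flip is needed)
/// to bring pixel data stored with this flag back to an upright image.
func uprightTransform(orientation: Orientation) -> (rotateCW: Int, flipHorizontal: Bool) {
    switch orientation {
    case .Up:            return (0, false)
    case .Down:          return (180, false)
    case .Left:          return (270, false)  // stored pixels need a 90° CCW turn
    case .Right:         return (90, false)   // the typical portrait iPhone photo
    case .UpMirrored:    return (0, true)
    case .DownMirrored:  return (180, true)
    case .LeftMirrored:  return (270, true)
    case .RightMirrored: return (90, true)
    }
}
```

With that mapping in mind, the common pattern is to apply the transform once by redrawing the image into a fresh context, and only then take its `CGImage` for `CGContextDrawImage`.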

0 Answers