I am using the code below from this answer to draw on a UIImage:
func drawOnImage(_ image: UIImage) -> UIImage {
    // Create a context of the starting image size and set it as the current one
    UIGraphicsBeginImageContext(image.size)
    // Draw the starting image in the current context as background
    image.draw(at: CGPoint.zero)
    // Get the current context
    let context = UIGraphicsGetCurrentContext()!
    // Draw a red line
    context.setLineWidth(2.0)
    context.setStrokeColor(UIColor.red.cgColor)
    context.move(to: CGPoint(x: 100, y: 100))
    context.addLine(to: CGPoint(x: 200, y: 200))
    context.strokePath()
    // Draw a transparent green circle
    context.setStrokeColor(UIColor.green.cgColor)
    context.setAlpha(0.5)
    context.setLineWidth(10.0)
    context.addEllipse(in: CGRect(x: 100, y: 100, width: 100, height: 100))
    context.drawPath(using: .stroke) // or .fillStroke if filling is needed
    // Grab the result as a new UIImage (safe to force-unwrap: a context is current here)
    let myImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    // Return the modified image
    return myImage
}
I call this function to draw an overlay on every frame of a video that is being read with an AVAssetReader (each frame is converted to a UIImage). This works great for videos captured with the iPhone camera.
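For context, here is a simplified sketch of how the frames reach drawOnImage (the real code hands each processed frame to an AVAssetWriter instead of collecting them; the function name and pixel format are just what I happen to use):

import AVFoundation
import CoreImage
import UIKit

func overlaidFrames(from asset: AVAsset) throws -> [UIImage] {
    guard let track = asset.tracks(withMediaType: .video).first else { return [] }
    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(
        track: track,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    )
    reader.add(output)
    reader.startReading()

    let ciContext = CIContext()
    var frames: [UIImage] = []
    while let sampleBuffer = output.copyNextSampleBuffer(),
          let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        // Convert the pixel buffer to a UIImage and run it through the drawing function
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { continue }
        frames.append(drawOnImage(UIImage(cgImage: cgImage)))
    }
    return frames
}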
However, with videos taken from other camera sources, the drawing function above produces unexpected results: the overlay drawings come out rotated, and in some cases they also appear to be scaled.
I have read about the AVAssetTrack property preferredTransform, and indeed, when I compare the preferredTransform of the iPhone and non-iPhone recordings, the values are different:
Non-iPhone recording preferredTransform (the identity transform):

CGAffineTransform(a: 1.0, b: 0.0, c: 0.0, d: 1.0, tx: 0.0, ty: 0.0)

iPhone recording preferredTransform (a 90° rotation plus a translation):

CGAffineTransform(a: 0.0, b: 1.0, c: -1.0, d: 0.0, tx: 1080.0, ty: 0.0)
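For reference, this is roughly how I am inspecting the transform, together with the display size it implies (the helper name is mine, just for logging):

import AVFoundation
import CoreGraphics

func logOrientationInfo(of asset: AVAsset) {
    guard let track = asset.tracks(withMediaType: .video).first else { return }
    let transform = track.preferredTransform
    // naturalSize is the pre-transform frame size; running it through the
    // transform gives the size the video is meant to be displayed at.
    let displayRect = CGRect(origin: .zero, size: track.naturalSize).applying(transform)
    print("preferredTransform:", transform)
    print("naturalSize:", track.naturalSize, "display size:", displayRect.size)
}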
However, it seems odd to me that preferredTransform should affect the drawing function above, since that function only takes a UIImage, and preferredTransform is not a property of UIImage.
My questions are:
- Is it correct to assume that the underlying AVAssetTrack's preferredTransform property is somehow causing CGContext to render the drawings differently?
- If so, is there a way to fix this behavior irrespective of the camera used to capture the video? (One idea I have been toying with is sketched below.)
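The idea, which I have not verified, is to map the track's preferredTransform to a UIImage.Orientation and tag each frame with it, so that image.draw(at:) inside drawOnImage renders the frame upright and the overlay coordinates line up. Roughly:

import AVFoundation
import UIKit

// Untested sketch: derive a UIImage orientation from the track's transform.
// The exact-equality checks are a simplification.
func imageOrientation(for t: CGAffineTransform) -> UIImage.Orientation {
    if t.a == 0 && t.b == 1 && t.c == -1 && t.d == 0 { return .right }   // 90° (typical portrait recording)
    if t.a == 0 && t.b == -1 && t.c == 1 && t.d == 0 { return .left }    // 90° the other way
    if t.a == -1 && t.b == 0 && t.c == 0 && t.d == -1 { return .down }   // 180°
    return .up                                                           // identity
}

// Usage when building each frame:
// let frame = UIImage(cgImage: cgImage, scale: 1.0,
//                     orientation: imageOrientation(for: track.preferredTransform))

I am not sure whether this is the right approach, or whether the transform should instead be applied when the overlaid frames are written back out.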