
When using .cropping(to:) on a CGImage, the aspect ratio of my crop rect is not being respected. As you can see in the output below, the cropped image height is 1080.0 when it should be 1350.0. Why is this crop not consistent with the rect I defined?

These images are taken with the selfie camera in portrait orientation. I'm expecting all images taken from the selfie camera in portrait orientation to keep the original width but lose height when cropping. The crop will always have a 4:5 aspect ratio.

let aspectRatio = CGFloat(4.0 / 5.0)
print("aspect ratio: \(aspectRatio)")
                                
let rect = CGRect(
    x: 0,
    y: 0,
    width: capture.size.width,
    height: CGFloat(capture.size.width / aspectRatio)
)

image = crop(
    image: capture,
    toRect: rect
)

private func crop(image: UIImage, toRect rect: CGRect) -> UIImage? {
    print("org image width: \(image.size.width)")
    print("org image height: \(image.size.height)")
    print("crop width: \(rect.size.width)")
    print("crop height: \(rect.size.height)")
        
    // Convert the UIImage to a CGImage
    guard let cgImage = image.cgImage else { return nil }
        
    // Apply the crop rect, adjusted for the image's scale
    print("img scale: \(image.scale)")
    print("rect x * scale: \(rect.origin.x * image.scale)")
    print("rect y * scale: \(rect.origin.y * image.scale)")
    print("rect width * scale: \(rect.size.width * image.scale)")
    print("rect height * scale: \(rect.size.height * image.scale)")
    let scaledRect = CGRect(
        x: rect.origin.x * image.scale,
        y: rect.origin.y * image.scale,
        width: rect.size.width * image.scale,
        height: rect.size.height * image.scale
    )
    guard let croppedCgImage = cgImage.cropping(to: scaledRect) else { return nil }
    print("cropped cgimage width: \(croppedCgImage.width)")
    print("cropped cgimage height: \(croppedCgImage.height)")
        
    // Create a new UIImage from the cropped CGImage
    let croppedImage = UIImage(cgImage: croppedCgImage, scale: image.scale, orientation: .leftMirrored)
    print("cropped image width: \(croppedImage.size.width)")
    print("cropped image height: \(croppedImage.size.height)")
        
    return croppedImage
}
Output:

aspect ratio: 0.8
org image width: 1080.0
org image height: 1920.0
crop width: 1080.0
crop height: 1350.0
img scale: 1.0
rect x * scale: 0.0
rect y * scale: 0.0
rect width * scale: 1080.0
rect height * scale: 1350.0
cropped cgimage width: 1080
cropped cgimage height: 1080
cropped image width: 1080.0
cropped image height: 1080.0
Ryan Sam
  • I think it is because you are changing the orientation. Instead of `leftMirrored`, try using the orientation of the `image` (image.orientation; not sure if there's a property like this). – udi May 04 '23 at 10:34
  • @udi Keeping the original orientation (`UIImage(cgImage: croppedCgImage, scale: image.scale, orientation: image.imageOrientation)`) rendered the same results – Ryan Sam May 04 '23 at 10:53

1 Answer


There are two separate problems here:

  1. You are not considering the orientation when building the CGRect of the CGImage. Remember that the CGImage represents the payload of the image before any scaling, orientation, etc. Your algorithm for calculating the CGRect considered the scale, but not the imageOrientation.

    To verify this, look at the cgImage.width and cgImage.height and I think you will find them transposed from the values you might have expected from the UIImage (see the snippet after this list).

    To do this properly with CGImage, you need a switch statement with all eight different orientation values, and build a CGRect mapped to the CGImage accordingly.

  2. A minor point, but cropping(to:) will actually crop to the intersection of the CGRect and the image dimensions. In your case, this (combined with the prior point) is why you are getting 1080×1080.
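
To illustrate the first point, here is a quick check you can run (a sketch; `capture` is the original UIImage from your code):

if let cgImage = capture.cgImage {
    print("UIImage size: \(capture.size)")                       // e.g. (1080.0, 1920.0)
    print("CGImage size: \(cgImage.width) × \(cgImage.height)")  // likely 1920 × 1080, i.e. transposed
    print("orientation raw value: \(capture.imageOrientation.rawValue)")  // 6 is .leftMirrored
}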

Personally, I just re-render the image to the cropped dimensions if either the imageOrientation is not .up or the cropping bounds exceed the dimensions of the image. It keeps things simple:

extension UIImage {
    /// Crop the image to be the required size.
    ///
    /// - parameter bounds:    The bounds to which the new image should be cropped.
    ///
    /// - returns:             Cropped `UIImage`.

    func cropping(to bounds: CGRect) -> UIImage? {
        // if bounds is entirely within image, do simple CGImage `cropping` …

        if CGRect(origin: .zero, size: size).contains(bounds),
            imageOrientation == .up,
            let cropped = cgImage?.cropping(to: bounds * scale)
        {
            return UIImage(cgImage: cropped, scale: scale, orientation: imageOrientation)
        }

        // … otherwise, manually render whole image, only drawing what we need

        let format = UIGraphicsImageRendererFormat()
        format.opaque = false
        format.scale = scale

        return UIGraphicsImageRenderer(size: bounds.size, format: format).image { _ in
            let origin = CGPoint(x: -bounds.minX, y: -bounds.minY)
            draw(in: CGRect(origin: origin, size: size))
        }
    }
}

That uses these extensions:

extension CGSize {
    static func * (lhs: CGSize, rhs: CGFloat) -> CGSize {
        return CGSize(width: lhs.width * rhs, height: lhs.height * rhs)
    }
}

extension CGPoint {
    static func * (lhs: CGPoint, rhs: CGFloat) -> CGPoint {
        return CGPoint(x: lhs.x * rhs, y: lhs.y * rhs)
    }
}

extension CGRect {
    static func * (lhs: CGRect, rhs: CGFloat) -> CGRect {
        return CGRect(origin: lhs.origin * rhs, size: lhs.size * rhs)
    }
}
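
With those in place, the crop from the question could be written like this (a sketch; `capture` is the original 1080×1920 UIImage from the question, so the expected result is 1080×1350):

let aspectRatio: CGFloat = 4.0 / 5.0
let cropRect = CGRect(
    x: 0,
    y: 0,
    width: capture.size.width,                  // 1080
    height: capture.size.width / aspectRatio    // 1350
)
let croppedImage = capture.cropping(to: cropRect)
print(croppedImage?.size as Any)                // expected: (1080.0, 1350.0)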

All of that having been said, you can also manually map the CGRect of UIImage to the appropriate CGRect of CGImage, factoring in each of the eight UIImage.Orientation values. It is not hard, but just a bit tedious.
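
For reference, here is a sketch (untested) of what that mapping might look like, based on the documented meaning of each orientation case. It expects the crop rect and the displayed image size already converted to pixels (i.e., multiplied by scale):

import UIKit

// Map a crop rect expressed in the UIImage's displayed (oriented) coordinate space
// to the corresponding rect in the underlying CGImage's coordinate space.
func cgImageRect(
    for rect: CGRect,          // crop rect in displayed pixel coordinates
    displaySize size: CGSize,  // displayed (oriented) image size in pixels
    orientation: UIImage.Orientation
) -> CGRect {
    let x = rect.minX, y = rect.minY, w = rect.width, h = rect.height
    let dw = size.width, dh = size.height

    switch orientation {
    case .up:            return CGRect(x: x,          y: y,          width: w, height: h)
    case .upMirrored:    return CGRect(x: dw - x - w, y: y,          width: w, height: h)
    case .down:          return CGRect(x: dw - x - w, y: dh - y - h, width: w, height: h)
    case .downMirrored:  return CGRect(x: x,          y: dh - y - h, width: w, height: h)
    case .left:          return CGRect(x: dh - y - h, y: x,          width: h, height: w)
    case .leftMirrored:  return CGRect(x: y,          y: x,          width: h, height: w)
    case .right:         return CGRect(x: y,          y: dw - x - w, width: h, height: w)
    case .rightMirrored: return CGRect(x: dh - y - h, y: dw - x - w, width: h, height: w)
    @unknown default:    return rect
    }
}

For the .leftMirrored case from the question, a 1080×1350 rect in displayed coordinates would map to a 1350×1080 rect in the 1920×1080 CGImage; crop that with cgImage.cropping(to:) and rebuild the UIImage with the original scale and orientation to get the expected 1080×1350 result.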


Having gone this far down the rabbit hole, I should mention that building a UIImage from a cropped CGImage is computationally efficient, but not space-efficient. Notably, if you look at the bytesPerRow for the cropped image, it is the same as the bytes per row of the original image (even though there might actually be far fewer pixels per row!).

E.g., a 1000×2000 px image takes 8 MB. A 16×16 px image cropped with CGImage takes roughly 60 KB. Both have 4,000 bytes per row (!). However, if we re-render using UIGraphicsImageRenderer, the smaller image takes only 1,024 bytes with 64 bytes per row. (Note, this size issue disappears if you save the asset and reload it, but be conscious of the memory characteristics immediately after cropping.)
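
If you want to see this yourself, compare bytesPerRow before and after (using the variable names from the question's crop function):

// Compare the memory layout of the original and the CGImage-cropped bitmap
print("original bytesPerRow: \(cgImage.bytesPerRow)")
print("cropped bytesPerRow: \(croppedCgImage.bytesPerRow)")    // typically the same as the original
print("cropped size: \(croppedCgImage.width) × \(croppedCgImage.height)")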

So, as you consider between “should I crop the CGImage” vs “should I just re-render the image”, consider not only the code complexity for all the orientations, but also the tradeoff between the speed of cropping and size of the resulting image.

Rob
  • FWIW, https://stackoverflow.com/a/28513086/1271826 has my standard image resizing/cropping extension. – Rob May 04 '23 at 20:07