Are you using one of the scaling content modes in your image view? If so, the dimensions of the image differ from the dimensions of the image view and you have two options:
You could just resize your image to match the dimensions of your image view before attempting the crop. Then a standard cropping routine would work. But that can result in either a loss of resolution or the introduction of pixelation.
The better solution is to transform the cropping rectangle to the coordinates of the image dimensions before cropping.
For example:
import UIKit

extension UIImageView {
    /// Returns the portion of the displayed image that falls within `rect`,
    /// where `rect` is expressed in the image view's coordinate space.
    func image(at rect: CGRect) -> UIImage? {
        guard
            let image = image,
            let rect = convertToImageCoordinates(rect)
        else {
            return nil
        }

        return image.cropped(to: rect)
    }

    /// Converts a rectangle from the image view's coordinate space into the
    /// image's coordinate space, taking the current `contentMode` into account.
    func convertToImageCoordinates(_ rect: CGRect) -> CGRect? {
        guard let image = image else { return nil }

        let imageSize = CGSize(width: image.size.width, height: image.size.height)
        let imageCenter = CGPoint(x: imageSize.width / 2, y: imageSize.height / 2)

        let imageViewRatio = bounds.width / bounds.height
        let imageRatio = imageSize.width / imageSize.height

        // How many image points correspond to one image view point, per axis.
        let scale: CGPoint

        switch contentMode {
        case .scaleToFill:
            // Stretched independently along each axis.
            scale = CGPoint(x: imageSize.width / bounds.width, y: imageSize.height / bounds.height)

        case .scaleAspectFit:
            // Image fits entirely within the view; the axis that reaches the edge dictates the scale.
            let value: CGFloat
            if imageRatio < imageViewRatio {
                value = imageSize.height / bounds.height
            } else {
                value = imageSize.width / bounds.width
            }
            scale = CGPoint(x: value, y: value)

        case .scaleAspectFill:
            // Image fills the view and overflows one axis; the axis that exactly fills dictates the scale.
            let value: CGFloat
            if imageRatio > imageViewRatio {
                value = imageSize.height / bounds.height
            } else {
                value = imageSize.width / bounds.width
            }
            scale = CGPoint(x: value, y: value)

        case .center:
            // No scaling; the image is simply centered in the view.
            scale = CGPoint(x: 1, y: 1)

        // unhandled cases include
        // case .redraw:
        // case .top:
        // case .bottom:
        // case .left:
        // case .right:
        // case .topLeft:
        // case .topRight:
        // case .bottomLeft:
        // case .bottomRight:

        default:
            fatalError("Unexpected contentMode")
        }

        // Normalize rectangles with negative width/height (e.g. from dragging up or to the left).
        var rect = rect
        if rect.width < 0 {
            rect.origin.x += rect.width
            rect.size.width = -rect.width
        }

        if rect.height < 0 {
            rect.origin.y += rect.height
            rect.size.height = -rect.height
        }

        // All of the handled modes keep the image centered in the view,
        // so map the rectangle relative to the two centers.
        return CGRect(x: (rect.minX - bounds.midX) * scale.x + imageCenter.x,
                      y: (rect.minY - bounds.midY) * scale.y + imageCenter.y,
                      width: rect.width * scale.x,
                      height: rect.height * scale.y)
    }
}
Now, I'm only handling four of the possible content modes; if you want to handle more, you'd have to implement those yourself. But hopefully this illustrates the pattern, namely converting the selection CGRect into coordinates within the image before attempting the crop.
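
For example, the call site might look something like the following. The names here (imageView, selectionRect, the "photo" asset) are just placeholders for whatever your app uses:

import UIKit

// Hypothetical setup: an image view using one of the handled content modes.
let imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 300, height: 200))
imageView.contentMode = .scaleAspectFit
imageView.image = UIImage(named: "photo")

// The user's selection, expressed in the image view's coordinate space
// (e.g. captured from a pan gesture or a selection overlay).
let selectionRect = CGRect(x: 40, y: 60, width: 120, height: 90)

if let cropped = imageView.image(at: selectionRect) {
    // Use `cropped`, e.g. display it elsewhere or write it to disk.
}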
FWIW, the cropping method I use, cropped(to:), comes from https://stackoverflow.com/a/28513086/1271826; it uses the more contemporary UIGraphicsImageRenderer, uses CoreGraphics cropping where it can, etc. But use whatever cropping routine you want; just make sure to transform the coordinates into something suitable for the image first.
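
If that link is ever unavailable, here is a minimal sketch of the general shape of such a cropped(to:) helper. This is not the implementation from the linked answer, just the idea: prefer a direct CGImage crop, and fall back to re-rendering with UIGraphicsImageRenderer (orientation edge cases are omitted here):

import UIKit

extension UIImage {
    /// Crops the image to `rect`, where `rect` is in the image's point coordinate system.
    /// Sketch only: does not handle non-.up image orientations.
    func cropped(to rect: CGRect) -> UIImage? {
        // Fast path: crop the backing CGImage directly. CGImage works in pixels,
        // so the rectangle has to be scaled up by the image's scale factor.
        if let cgImage = cgImage {
            let pixelRect = CGRect(x: rect.origin.x * scale,
                                   y: rect.origin.y * scale,
                                   width: rect.width * scale,
                                   height: rect.height * scale)
            guard let croppedCGImage = cgImage.cropping(to: pixelRect) else { return nil }
            return UIImage(cgImage: croppedCGImage, scale: scale, orientation: imageOrientation)
        }

        // Fallback: re-render the image shifted so the crop rectangle lands at the origin.
        let format = UIGraphicsImageRendererFormat.default()
        format.scale = scale
        return UIGraphicsImageRenderer(size: rect.size, format: format).image { _ in
            draw(at: CGPoint(x: -rect.origin.x, y: -rect.origin.y))
        }
    }
}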
