
I would like to take photos of A4 pieces of paper with writing on them. Importantly, I want the text to be readable, but I do not want images at resolutions like 2592x1936 or 3264x2448 pixels, as that would be too big. I also assume that rescaling the photo after capture takes extra time, so I would like to avoid that as well.

We can choose between the following qualities:

UIImagePickerControllerQualityTypeHigh            = 0
UIImagePickerControllerQualityTypeMedium          = 1   (default value)
UIImagePickerControllerQualityTypeLow             = 2
UIImagePickerControllerQualityType640x480         = 3
UIImagePickerControllerQualityTypeIFrame1280x720  = 4
UIImagePickerControllerQualityTypeIFrame960x540   = 5

If we were using AVFoundation, we could choose resolutions from this nice table (under the heading "Capturing Still Images").

But is there a similar table for UIImagePickerController, one which, for example, says that UIImagePickerControllerQualityTypeHigh equals 1920x1080 on the iPhone 3GS?


5 Answers


The UIImagePickerController quality types are used for video recording (they are set via the UIImagePickerController property videoQuality, as sketched below).
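
To make the distinction concrete, here is a minimal sketch of where these constants actually take effect. It assumes self is a view controller conforming to UINavigationControllerDelegate and UIImagePickerControllerDelegate; the quality constant only applies to movie capture, not to still photos.

#import <MobileCoreServices/MobileCoreServices.h>   // for kUTTypeMovie

UIImagePickerController *picker = [[UIImagePickerController alloc] init];
picker.sourceType = UIImagePickerControllerSourceTypeCamera;
picker.mediaTypes = @[(NSString *)kUTTypeMovie];    // video recording; still photos ignore videoQuality
picker.videoQuality = UIImagePickerControllerQualityTypeIFrame1280x720;
picker.delegate = self;
[self presentViewController:picker animated:YES completion:nil];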

If you want to specify the exact photo resolution, I guess you should use the AVFoundation framework instead of UIImagePickerController, or, as you said, resize the picture afterwards.

To resize it afterwards (found here):

//  ==============================================================
//  resizedImage
//  ==============================================================
// Return a scaled down copy of the image.  

UIImage* resizedImage(UIImage *inImage, CGRect thumbRect)
{
    CGImageRef          imageRef = [inImage CGImage];
    CGImageAlphaInfo    alphaInfo = CGImageGetAlphaInfo(imageRef);

    // There's a weirdness with kCGImageAlphaNone and CGBitmapContextCreate
    // see Supported Pixel Formats in the Quartz 2D Programming Guide
    // Creating a Bitmap Graphics Context section
    // only RGB 8 bit images with alpha of kCGImageAlphaNoneSkipFirst, kCGImageAlphaNoneSkipLast, kCGImageAlphaPremultipliedFirst,
    // and kCGImageAlphaPremultipliedLast, with a few other oddball image kinds are supported
    // The images on input here are likely to be png or jpeg files
    if (alphaInfo == kCGImageAlphaNone)
        alphaInfo = kCGImageAlphaNoneSkipLast;

    // Build a bitmap context that's the size of the thumbRect
    CGContextRef bitmap = CGBitmapContextCreate(
                NULL,
                thumbRect.size.width,       // width
                thumbRect.size.height,      // height
                CGImageGetBitsPerComponent(imageRef),   // really needs to always be 8
                4 * thumbRect.size.width,   // rowbytes
                CGImageGetColorSpace(imageRef),
                alphaInfo
        );

    // Draw into the context, this scales the image
    CGContextDrawImage(bitmap, thumbRect, imageRef);

    // Get an image from the context and a UIImage
    CGImageRef  ref = CGBitmapContextCreateImage(bitmap);
    UIImage*    result = [UIImage imageWithCGImage:ref];

    CGContextRelease(bitmap);   // ok if NULL
    CGImageRelease(ref);

    return result;
}
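
As a rough usage sketch: the delegate callback and info key below are the standard UIKit ones, but the 1240x1754 target rect is just a hypothetical size for a readable A4 page, not something from the original answer.

- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *original = info[UIImagePickerControllerOriginalImage];
    UIImage *smaller  = resizedImage(original, CGRectMake(0, 0, 1240, 1754));
    // ... save or display 'smaller' ...
    [picker dismissViewControllerAnimated:YES completion:nil];
}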

Hope this helps!

– Thermometer
  • The `alphaInfo` issue is probably due to the wrong type used. Use `CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);` instead. – Berik May 05 '14 at 08:12

In a word: no.

There is no "similar table" because you're misunderstanding how the likes of UIImagePickerControllerQualityTypeHigh are used by UIImagePickerController.

If you're capturing still images with UIImagePickerController, you will get them at the default (i.e. maximum) quality of the device in question (anywhere from 8 MP on the iPhone 4S down to less than 1 MP on the iPod touch or iPad 2).

With AVFoundation, however, you have choices, thanks to the session presets to which you refer.

But unlike the AVFoundation session presets, UIImagePickerController's UIImagePickerControllerQualityType options apply only to motion video, not to still image capture.

So you have the choice of using AVFoundation to control the capture size (see the sketch below), or resizing the full-size images before saving them; but UIImagePickerController can't do what you want, I'm afraid.
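
A rough sketch of the AVFoundation route, assuming AVFoundation is linked and the usual error handling and camera-authorization checks are added; the preset names are the standard AVCaptureSession presets, and AVCaptureStillImageOutput was the still-capture class of that era.

#import <AVFoundation/AVFoundation.h>

AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPreset1280x720;   // a smaller preset instead of the sensor's full resolution

AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
AVCaptureStillImageOutput *output = [[AVCaptureStillImageOutput alloc] init];

if (input && [session canAddInput:input])  [session addInput:input];
if ([session canAddOutput:output])         [session addOutput:output];
[session startRunning];
// Then capture with -captureStillImageAsynchronouslyFromConnection:completionHandler: on 'output'.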

– Wildaker

The problem with Thermometer's answer is that it screws up the image orientation.

I've found this solution, which fixes the orientation problem and performs faster:

- (UIImage *)scaleAndRotateImage:(UIImage *)image {
    int kMaxResolution = 320; // Or whatever

    CGImageRef imgRef = image.CGImage;

    CGFloat width = CGImageGetWidth(imgRef);
    CGFloat height = CGImageGetHeight(imgRef);

    CGAffineTransform transform = CGAffineTransformIdentity;
    CGRect bounds = CGRectMake(0, 0, width, height);
    if (width > kMaxResolution || height > kMaxResolution) {
        CGFloat ratio = width/height;
        if (ratio > 1) {
            bounds.size.width = kMaxResolution;
            bounds.size.height = bounds.size.width / ratio;
        }
        else {
            bounds.size.height = kMaxResolution;
            bounds.size.width = bounds.size.height * ratio;
        }
    }

    CGFloat scaleRatio = bounds.size.width / width;
    CGSize imageSize = CGSizeMake(CGImageGetWidth(imgRef), CGImageGetHeight(imgRef));
    CGFloat boundHeight;
    UIImageOrientation orient = image.imageOrientation;
    switch(orient) {

        case UIImageOrientationUp: //EXIF = 1
            transform = CGAffineTransformIdentity;
            break;

        case UIImageOrientationUpMirrored: //EXIF = 2
            transform = CGAffineTransformMakeTranslation(imageSize.width, 0.0);
            transform = CGAffineTransformScale(transform, -1.0, 1.0);
            break;

        case UIImageOrientationDown: //EXIF = 3
            transform = CGAffineTransformMakeTranslation(imageSize.width, imageSize.height);
            transform = CGAffineTransformRotate(transform, M_PI);
            break;

        case UIImageOrientationDownMirrored: //EXIF = 4
            transform = CGAffineTransformMakeTranslation(0.0, imageSize.height);
            transform = CGAffineTransformScale(transform, 1.0, -1.0);
            break;

        case UIImageOrientationLeftMirrored: //EXIF = 5
            boundHeight = bounds.size.height;
            bounds.size.height = bounds.size.width;
            bounds.size.width = boundHeight;
            transform = CGAffineTransformMakeTranslation(imageSize.height, imageSize.width);
            transform = CGAffineTransformScale(transform, -1.0, 1.0);
            transform = CGAffineTransformRotate(transform, 3.0 * M_PI / 2.0);
            break;

        case UIImageOrientationLeft: //EXIF = 6
            boundHeight = bounds.size.height;
            bounds.size.height = bounds.size.width;
            bounds.size.width = boundHeight;
            transform = CGAffineTransformMakeTranslation(0.0, imageSize.width);
            transform = CGAffineTransformRotate(transform, 3.0 * M_PI / 2.0);
            break;

        case UIImageOrientationRightMirrored: //EXIF = 7
            boundHeight = bounds.size.height;
            bounds.size.height = bounds.size.width;
            bounds.size.width = boundHeight;
            transform = CGAffineTransformMakeScale(-1.0, 1.0);
            transform = CGAffineTransformRotate(transform, M_PI / 2.0);
            break;

        case UIImageOrientationRight: //EXIF = 8
            boundHeight = bounds.size.height;
            bounds.size.height = bounds.size.width;
            bounds.size.width = boundHeight;
            transform = CGAffineTransformMakeTranslation(imageSize.height, 0.0);
            transform = CGAffineTransformRotate(transform, M_PI / 2.0);
            break;

        default:
            [NSException raise:NSInternalInconsistencyException format:@"Invalid image orientation"];

    }

    UIGraphicsBeginImageContext(bounds.size);

    CGContextRef context = UIGraphicsGetCurrentContext();

    if (orient == UIImageOrientationRight || orient == UIImageOrientationLeft) {
        CGContextScaleCTM(context, -scaleRatio, scaleRatio);
        CGContextTranslateCTM(context, -height, 0);
    }
    else {
        CGContextScaleCTM(context, scaleRatio, -scaleRatio);
        CGContextTranslateCTM(context, 0, -height);
    }

    CGContextConcatCTM(context, transform);

    CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, width, height), imgRef);
    UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return imageCopy;
}
– marcelosalloum

For a Swift 2.1 project, this is a translation of @marcelosalloum's code:

func scaleAndRotateImage(image: UIImage, kMaxResolution: CGFloat) -> UIImage {
    var imageCopy: UIImage = image
    if let imgRef: CGImageRef = image.CGImage {

        let width = CGFloat(CGImageGetWidth(imgRef))
        let height = CGFloat(CGImageGetHeight(imgRef))

        var transform = CGAffineTransformIdentity
        var bounds = CGRectMake(0, 0, width, height)

        if width > kMaxResolution || height > kMaxResolution {
            let ratio = width/height
            if ratio > 1 {
                bounds.size.width = kMaxResolution
                bounds.size.height = bounds.size.width / ratio
            } else {
                bounds.size.height = kMaxResolution
                bounds.size.width = bounds.size.height * ratio
            }
        }

        let scaleRatio = bounds.size.width / width
        let imageSize = CGSizeMake(width, height)
        let boundHeight: CGFloat
        let orient: UIImageOrientation = image.imageOrientation
        switch orient {
        case .Up:
            transform = CGAffineTransformIdentity

        case .UpMirrored:
            transform = CGAffineTransformMakeTranslation(imageSize.width, 0.0)
            transform = CGAffineTransformScale(transform, -1.0, 1.0)

        case .Down:
            transform = CGAffineTransformMakeTranslation(imageSize.width, imageSize.height)
            transform = CGAffineTransformRotate(transform, CGFloat(M_PI))

        case .DownMirrored: //EXIF = 4
            transform = CGAffineTransformMakeTranslation(0.0, imageSize.height)
            transform = CGAffineTransformScale(transform, 1.0, -1.0)

        case .LeftMirrored: //EXIF = 5
            boundHeight = bounds.size.height
            bounds.size.height = bounds.size.width
            bounds.size.width = boundHeight
            transform = CGAffineTransformMakeTranslation(imageSize.height, imageSize.width)
            transform = CGAffineTransformScale(transform, -1.0, 1.0)
            transform = CGAffineTransformRotate(transform, 3.0 * CGFloat(M_PI) / 2.0)

        case .Left: //EXIF = 6
            boundHeight = bounds.size.height
            bounds.size.height = bounds.size.width
            bounds.size.width = boundHeight
            transform = CGAffineTransformMakeTranslation(0.0, imageSize.width)
            transform = CGAffineTransformRotate(transform, 3.0 * CGFloat(M_PI) / 2.0)

        case .RightMirrored: //EXIF = 7
            boundHeight = bounds.size.height
            bounds.size.height = bounds.size.width
            bounds.size.width = boundHeight
            transform = CGAffineTransformMakeScale(-1.0, 1.0)
            transform = CGAffineTransformRotate(transform, CGFloat(M_PI) / 2.0)

        case .Right: //EXIF = 8
            boundHeight = bounds.size.height
            bounds.size.height = bounds.size.width
            bounds.size.width = boundHeight
            transform = CGAffineTransformMakeTranslation(imageSize.height, 0.0)
            transform = CGAffineTransformRotate(transform, CGFloat(M_PI) / 2.0)
        }
        UIGraphicsBeginImageContext(bounds.size)

        if let context: CGContextRef = UIGraphicsGetCurrentContext() {
            if orient == .Right || orient == .Left {
                CGContextScaleCTM(context, -scaleRatio, scaleRatio)
                CGContextTranslateCTM(context, -height, 0)
            } else {
                CGContextScaleCTM(context, scaleRatio, -scaleRatio)
                CGContextTranslateCTM(context, 0, -height)
            }

            CGContextConcatCTM(context, transform)

            CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0,0,width,height), imgRef)
            imageCopy = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
        }
    }
    return imageCopy
}
– DanKas

A Swift 3 translation of marcelosalloum's answer:

private func scale(image originalImage: UIImage, toLessThan maxResolution: CGFloat) -> UIImage? {
    guard let imageReference = originalImage.cgImage else { return nil }

    let rotate90 = CGFloat.pi/2.0 // Radians
    let rotate180 = CGFloat.pi // Radians
    let rotate270 = 3.0*CGFloat.pi/2.0 // Radians

    let originalWidth = CGFloat(imageReference.width)
    let originalHeight = CGFloat(imageReference.height)
    let originalOrientation = originalImage.imageOrientation

    var newWidth = originalWidth
    var newHeight = originalHeight

    if originalWidth > maxResolution || originalHeight > maxResolution {
        let aspectRatio: CGFloat = originalWidth / originalHeight
        newWidth = aspectRatio > 1 ? maxResolution : maxResolution * aspectRatio
        newHeight = aspectRatio > 1 ? maxResolution / aspectRatio : maxResolution
    }

    let scaleRatio: CGFloat = newWidth / originalWidth
    var scale: CGAffineTransform = .init(scaleX: scaleRatio, y: -scaleRatio)
    scale = scale.translatedBy(x: 0.0, y: -originalHeight)

    var rotateAndMirror: CGAffineTransform

    switch originalOrientation {
    case .up:
        rotateAndMirror = .identity

    case .upMirrored:
        rotateAndMirror = .init(translationX: originalWidth, y: 0.0)
        rotateAndMirror = rotateAndMirror.scaledBy(x: -1.0, y: 1.0)

    case .down:
        rotateAndMirror = .init(translationX: originalWidth, y: originalHeight)
        rotateAndMirror = rotateAndMirror.rotated(by: rotate180 )

    case .downMirrored:
        rotateAndMirror = .init(translationX: 0.0, y: originalHeight)
        rotateAndMirror = rotateAndMirror.scaledBy(x: 1.0, y: -1.0)

    case .left:
        (newWidth, newHeight) = (newHeight, newWidth)
        rotateAndMirror = .init(translationX: 0.0, y: originalWidth)
        rotateAndMirror = rotateAndMirror.rotated(by: rotate270)
        scale = .init(scaleX: -scaleRatio, y: scaleRatio)
        scale = scale.translatedBy(x: -originalHeight, y: 0.0)

    case .leftMirrored:
        (newWidth, newHeight) = (newHeight, newWidth)
        rotateAndMirror = .init(translationX: originalHeight, y: originalWidth)
        rotateAndMirror = rotateAndMirror.scaledBy(x: -1.0, y: 1.0)
        rotateAndMirror = rotateAndMirror.rotated(by: rotate270)

    case .right:
        (newWidth, newHeight) = (newHeight, newWidth)
        rotateAndMirror = .init(translationX: originalHeight, y: 0.0)
        rotateAndMirror = rotateAndMirror.rotated(by: rotate90)
        scale = .init(scaleX: -scaleRatio, y: scaleRatio)
        scale = scale.translatedBy(x: -originalHeight, y: 0.0)

    case .rightMirrored:
        (newWidth, newHeight) = (newHeight, newWidth)
        rotateAndMirror = .init(scaleX: -1.0, y: 1.0)
        rotateAndMirror = rotateAndMirror.rotated(by: CGFloat.pi/2.0)
    }

    UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newHeight))
    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    context.concatenate(scale)
    context.concatenate(rotateAndMirror)
    context.draw(imageReference, in: CGRect(x: 0, y: 0, width: originalWidth, height: originalHeight))
    let copy = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    return copy
}
– Tom Counsell