
After applying a 3D transform to a UIImageView.layer, I need to save the resulting "view" as a new UIImage... It seemed like a simple task at first :-) but no luck so far, and searching hasn't turned up any clues :-( so I'm hoping someone will be kind enough to point me in the right direction.

A very simple iPhone project is available here.

Thanks.

- (void)transformImage {
    float degrees = 12.0;
    float zDistance = 250;
    CATransform3D transform3D = CATransform3DIdentity;
    transform3D.m34 = 1.0 / zDistance; // the m34 cell of the matrix controls perspective, and zDistance affects the "sharpness" of the transform
    transform3D = CATransform3DRotate(transform3D, DEGREES_TO_RADIANS(degrees), 1, 0, 0); // perspective rotation around the x-axis
    imageView.layer.transform = transform3D;
}

/* FAIL : capturing layer contents doesn't get the transformed image -- just the original

CGImageRef newImageRef = (CGImageRef)imageView.layer.contents;

UIImage *image = [UIImage imageWithCGImage:newImageRef];

*/


/* FAIL : the docs for renderInContext: state that it does not render 3D transforms

UIGraphicsBeginImageContext(imageView.image.size);

[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];

UIImage *image = UIGraphicsGetImageFromCurrentImageContext();

UIGraphicsEndImageContext();

*/
//
// header
//
#import <QuartzCore/QuartzCore.h>
#define DEGREES_TO_RADIANS(x) ((x) * M_PI / 180.0)
UIImageView *imageView;
@property (nonatomic, retain) IBOutlet UIImageView *imageView;

//
// code
//
@synthesize imageView;

- (void)transformImage {
    float degrees = 12.0;
    float zDistance = 250;
    CATransform3D transform3D = CATransform3DIdentity;
    transform3D.m34 = 1.0 / zDistance; // the m34 cell of the matrix controls perspective, and zDistance affects the "sharpness" of the transform
    transform3D = CATransform3DRotate(transform3D, DEGREES_TO_RADIANS(degrees), 1, 0, 0); // perspective rotation around the x-axis
    imageView.layer.transform = transform3D;
}

- (UIImage *)captureView:(UIImageView *)view {
    UIGraphicsBeginImageContext(view.frame.size);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

- (void)imageSavedToPhotosAlbum:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo {
    NSString *title = @"Save to Photo Album";
    NSString *message = (error ? [error description] : @"Success!");
    UIAlertView *alert = [[UIAlertView alloc] initWithTitle:title message:message delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
    [alert show];
    [alert release];
}

- (IBAction)saveButtonClicked:(id)sender {
    UIImage *newImage = [self captureView:imageView];
    UIImageWriteToSavedPhotosAlbum(newImage, self, @selector(imageSavedToPhotosAlbum:didFinishSavingWithError:contextInfo:), nil);
}
David Balmer
  • Have you tried grabbing the view containing your transformed view? – Kenny Winker Jan 08 '10 at 08:27
  • Not quite sure what you mean... I've tried [view image] (where view is my UIImageView), but the image returned is not the transformed image. – David Balmer Jan 12 '10 at 14:04
  • Find an answer for this? – Steven Baughman May 17 '12 at 21:00
  • Did you get an answer for this question? Please help me; I have the same problem: after changing the 3D transform, how do I save the image? Is it possible? – Mani Dec 20 '12 at 13:58
  • That's because the layer contents are not transformed, and `[imageView.layer renderInContext:...]` renders in the coordinate system of `imageView`. The transform is applied between the view and its superview, so for this to have *any* chance of working you would need to stick it in a container view and render the container (but if the docs say it doesn't handle 3D transforms, then it probably won't work anyway, though I would expect *some* sort of transform to be applied). – tc. Apr 24 '13 at 23:03
  • Load the UIImage into a UIWebView (via HTML injection) and then transform the image there (via CSS3 or Javascript injection) and then user `renderInContext` on your UIWebView. – Albert Renshaw Jul 29 '13 at 21:16
  • (*Side note, you can set a UIWebView's background to transparent with CSS (and by ALSO toggling the background settings in objective-c) so that your 3D transformed image isn't enclosed by an opaque blank white square) – Albert Renshaw Jul 29 '13 at 21:19

9 Answers

8

I ended up writing a render method that works pixel by pixel on the CPU, using the inverse of the view transform.

Basically, it first renders the original UIImageView into a UIImage. Then every pixel of the output UIImage is mapped back through the inverse transform matrix to find which source pixel it should sample, which produces the transformed UIImage.

RenderUIImageView.h

#import <UIKit/UIKit.h>
#import <QuartzCore/CATransform3D.h>
#import <QuartzCore/CALayer.h>

@interface RenderUIImageView : UIImageView

- (UIImage *)generateImage;

@end

RenderUIImageView.m

#import "RenderUIImageView.h"

@interface RenderUIImageView()

@property (assign) CATransform3D transform;
@property (assign) CGRect rect;

@property (assign) float denominatorx;
@property (assign) float denominatory;
@property (assign) float denominatorw;

@property (assign) float factor;

@end

@implementation RenderUIImageView


- (UIImage *)generateImage
{

    _transform = self.layer.transform;

    _denominatorx = _transform.m12 * _transform.m21 - _transform.m11 * _transform.m22 + _transform.m14 * _transform.m22 * _transform.m41 - _transform.m12 * _transform.m24 * _transform.m41 - _transform.m14 * _transform.m21 * _transform.m42 + _transform.m11 * _transform.m24 * _transform.m42;

    _denominatory = -_transform.m12 * _transform.m21 + _transform.m11 * _transform.m22 - _transform.m14 * _transform.m22 * _transform.m41 + _transform.m12 * _transform.m24 * _transform.m41 + _transform.m14 * _transform.m21 * _transform.m42 - _transform.m11 * _transform.m24 * _transform.m42;

    _denominatorw = _transform.m12 * _transform.m21 - _transform.m11 * _transform.m22 + _transform.m14 * _transform.m22 * _transform.m41 - _transform.m12 * _transform.m24 * _transform.m41 - _transform.m14 * _transform.m21 * _transform.m42 + _transform.m11 * _transform.m24 * _transform.m42;

    _rect = self.bounds;

    if (UIGraphicsBeginImageContextWithOptions != NULL) {

        UIGraphicsBeginImageContextWithOptions(_rect.size, NO, 0.0);
    } else {
        UIGraphicsBeginImageContext(_rect.size);
    }

    if ([[UIScreen mainScreen] respondsToSelector:@selector(displayLinkWithTarget:selector:)] &&
        ([UIScreen mainScreen].scale == 2.0)) {
        _factor = 2.0f;
    } else {
        _factor = 1.0f;
    }


    UIImageView *img = [[UIImageView alloc] initWithFrame:_rect];
    img.image = self.image;

    [img.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *source = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CGContextRef ctx;
    CGImageRef imageRef = [source CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    unsigned char *inputData = malloc(height * width * 4);
    unsigned char *outputData = malloc(height * width * 4);

    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;

    CGContextRef context = CGBitmapContextCreate(inputData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // No need to draw the image into outputData: every byte of it is written
    // by the loop below, and a bitmap context for it is created afterwards.
    // (The original version also released colorSpace twice -- once here and
    // once for a second, redundant context -- which is undefined behavior.)


    for (int ii = 0 ; ii < width * height ; ++ii)
    {
        int x = ii % width;
        int y = ii / width;
        int indexOutput = 4 * x + 4 * width * y;

        CGPoint p = [self modelToScreen:(x*2/_factor - _rect.size.width)/2.0 :(y*2/_factor - _rect.size.height)/2.0];

        p.x *= _factor;
        p.y *= _factor;

        int indexInput = 4*(int)p.x + (4*width*(int)p.y);

        if (p.x >= width || p.x < 0 || p.y >= height || p.y < 0 || indexInput >= width * height * 4)
        {
            // Back-projected point falls outside the source image: write a transparent pixel
            outputData[indexOutput] = 0;
            outputData[indexOutput+1] = 0;
            outputData[indexOutput+2] = 0;
            outputData[indexOutput+3] = 0;
        }
        else
        {
            outputData[indexOutput] = inputData[indexInput];
            outputData[indexOutput+1] = inputData[indexInput + 1];
            outputData[indexOutput+2] = inputData[indexInput + 2];
            outputData[indexOutput+3] = 255;
        }
    }

    // Use bytesPerRow (the layout of outputData), not the source image's
    // CGImageGetBytesPerRow, which may include row padding and differ.
    ctx = CGBitmapContextCreate(outputData, width, height, bitsPerComponent, bytesPerRow,
                                CGImageGetColorSpace(imageRef), kCGImageAlphaPremultipliedLast);

    CGImageRef outputRef = CGBitmapContextCreateImage(ctx);
    UIImage *rawImage = [UIImage imageWithCGImage:outputRef];
    CGImageRelease(outputRef); // imageWithCGImage: retains its argument
    CGContextRelease(ctx);
    free(inputData);
    free(outputData);
    return rawImage;
}

- (CGPoint)modelToScreen:(float)x :(float)y
{
    float xp = (_transform.m22 * _transform.m41 - _transform.m21 * _transform.m42 - _transform.m22 * x + _transform.m24 * _transform.m42 * x + _transform.m21 * y - _transform.m24 * _transform.m41 * y) / _denominatorx;
    float yp = (-_transform.m11 * _transform.m42 + _transform.m12 * (_transform.m41 - x) + _transform.m14 * _transform.m42 * x + _transform.m11 * y - _transform.m14 * _transform.m41 * y) / _denominatory;
    float wp = (_transform.m12 * _transform.m21 - _transform.m11 * _transform.m22 + _transform.m14 * _transform.m22 * x - _transform.m12 * _transform.m24 * x - _transform.m14 * _transform.m21 * y + _transform.m11 * _transform.m24 * y) / _denominatorw;

    return CGPointMake(xp / wp, yp / wp);
}

@end
Marcos Fuentes
  • Did you write this code yourself? What is the license? Since it is on stackoverflow I guess it is up for grabs? – hfossli Mar 22 '13 at 15:14
  • Just wanting to let you know I have refactored and improved upon this code. You can check it out here https://github.com/hfossli/AGGeometryKit/blob/master/Source/AGTransformPixelMapper.m and https://github.com/hfossli/AGGeometryKit/blob/master/Source/CGImageRef%2BCATransform3D.m – hfossli Apr 24 '13 at 22:48
  • @MarcosFuentes, hfossli: can you explain why an inverse transform is used instead of the transform itself? – Vignesh Jan 09 '15 at 17:57
2

Theoretically, you could use the (now-allowed) undocumented call UIGetScreenImage() after quickly rendering it to the screen on a black background, but in practice this will be slow and ugly, so don't use it ;P.

Grant Paul
  • UIGetScreenImage is allowed now http://www.steveperks.co.uk/post/Apple-Allows-UIGetScreenImage-For-iPhone.aspx – slf Jan 05 '10 at 17:30
  • Tried calling UIGetScreenImage to test this out, but am getting "warning: implicit declaration of function 'UIGetScreenImage'". Which header do I need to include? – David Balmer Jan 12 '10 at 14:08
  • It's private API, so it probably isn't in any headers. Just put your own declaration (`extern CGImageRef UIGetScreenImage();`) somewhere in the file that calls it. If the function is not actually there, the linker will complain. – benzado Jan 12 '10 at 19:10
2

I had the same problem as you, and I found a solution! I wanted to rotate the UIImageView (because I had an animation on it) and then save the image. I used this function:

void CGContextConcatCTM(CGContextRef c, CGAffineTransform transform)

The transform parameter is the transform of your UIImageView, so anything you have done to the imageView will also be applied to the image. I have written a category method on UIImage:

- (UIImage *)imageRotateByTransform:(CGAffineTransform)transform {
    // calculate the size of the rotated view's containing box for our drawing space
    UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0, 0, self.size.width, self.size.height)];
    rotatedViewBox.transform = transform;
    CGSize rotatedSize = rotatedViewBox.frame.size;
    [rotatedViewBox release];

    // Create the bitmap context
    UIGraphicsBeginImageContext(rotatedSize);
    CGContextRef bitmap = UIGraphicsGetCurrentContext();

    // Move the origin to the middle of the image so we rotate and scale around the center.
    CGContextTranslateCTM(bitmap, rotatedSize.width / 2, rotatedSize.height / 2);

    // Rotate the image context using the transform
    CGContextConcatCTM(bitmap, transform);

    // Now, draw the rotated/scaled image into the context
    CGContextScaleCTM(bitmap, 1.0, -1.0);
    CGContextDrawImage(bitmap, CGRectMake(-self.size.width / 2, -self.size.height / 2, self.size.width, self.size.height), [self CGImage]);

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

Hope this will help you.

yuyi
  • This only works with 2-D transforms. The application of a CATransform3D is not something you can capture using code like this. – Brad Larson Jan 26 '12 at 21:29
  • @BradLarson Is there any progress on rendering CATransform3D? I have figured it out with my own buggy code in a large context, but the problem is that I am getting a black area with the transform3d image. – umer sufyan Sep 06 '13 at 04:33
1

Have you had a look at this? UIImage from UIView

slf
1

I had the same problem; I was able to use UIView's drawViewHierarchyInRect:afterScreenUpdates: method, available from iOS 7.0 (Documentation).

It draws the whole tree as it appears on the screen.

UIGraphicsBeginImageContextWithOptions(viewToRender.bounds.size, YES, 0);
[viewToRender drawViewHierarchyInRect:viewToRender.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
1

Let's say you have a UIImageView called imageView. If you apply a 3D transform and try to render the view with UIGraphicsImageRenderer, the transform is ignored:

imageView.layer.transform = someTransform3d

But if you convert the CATransform3D to a CGAffineTransform using CATransform3DGetAffineTransform and apply it to the transform property of the image view, it works:

 imageView.transform = CATransform3DGetAffineTransform(someTransform3d)

And then, you can use the extension below to save it as UIImage

extension UIView {
    func asImage() -> UIImage {
        let renderer = UIGraphicsImageRenderer(bounds: bounds)
        return renderer.image { rendererContext in
            layer.render(in: rendererContext.cgContext)
        }
    }
}

And just call

let image = imageView.asImage()
Ozgur Sahin
0

3D transform on UIImage / CGImageRef

I've improved on Marcos Fuentes's answer. You should be able to calculate the mapping of each pixel yourself. Not perfect, but it does the trick.

It is available in this repository: http://github.com/hfossli/AGGeometryKit/

The interesting files are:

https://github.com/hfossli/AGGeometryKit/blob/master/Source/AGTransformPixelMapper.m

https://github.com/hfossli/AGGeometryKit/blob/master/Source/CGImageRef%2BCATransform3D.m

https://github.com/hfossli/AGGeometryKit/blob/master/Source/UIImage%2BCATransform3D.m


3D transform on UIView / UIImageView

https://stackoverflow.com/a/12820877/202451

Then you will have full control over each point in the quadrilateral. :)

hfossli
  • Will you take a look at this http://stackoverflow.com/questions/16908331/catransform3drotate-effects-gone-after-applying-anchor-point – user523234 Jun 04 '13 at 22:12
0

In your captureView: method, try replacing this line:

[view.layer renderInContext:UIGraphicsGetCurrentContext()];

with this:

[view.layer.superlayer renderInContext:UIGraphicsGetCurrentContext()];

You may have to adjust the size you use to create the image context.

I don't see anything in the API doc that says renderInContext: ignores 3D transformations. However, the transformations apply to the layer, not its contents, which is why you need to render the superlayer to see the transformation applied.

Note that calling drawRect: on the superview definitely won't work, as drawRect: does not draw subviews.

benzado
  • Calling renderInContext on the superlayer gives me something that looks more like a screenshot, but still doesn't render the 3d transform. CALayer docs say: **Important**: The Mac OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered... – David Balmer Jan 12 '10 at 14:12
  • Bummer, dude. I think you need to use UIGetScreenImage() or write your own low-level thing to transform pixels from one buffer into another. – benzado Jan 12 '10 at 19:08
-1

A solution I found that at least worked in my case was to subclass CALayer. When a renderInContext: message is sent to a layer, that layer automatically forwards that message to all its sublayers. So all I had to do was to subclass CALayer and override the renderInContext: method and render what I needed to be rendered in the provided context.

For example, in my code I had a layer for which I was setting its contents to an image of an arrow:

UIImage *image = [UIImage imageNamed:@"arrow.png"];
MyLayer *myLayer = [[MyLayer alloc] init]; // alloc the subclass, not plain CALayer
[myLayer setContents:(__bridge id)[image CGImage]];
[self.mainLayer addSublayer:myLayer];

Now, when I applied a 3D 180-degree rotation over the Y-axis on the arrow and tried to do a [self.mainLayer renderInContext:context] afterwards, I was still getting the un-rotated image.

So in my subclass MyLayer I overrode renderInContext: and used an already rotated image to draw in provided context:

- (void)renderInContext:(CGContextRef)ctx
{
    NSLog(@"Rendered in context");
    UIImage *image = [UIImage imageNamed:@"arrow_rotated.png"];
    CGContextDrawImage(ctx, self.bounds, image.CGImage);
}

This worked in my case, however I can see that if you are doing lots of 3D transforms you may not be able to have an image ready for every possible scenario. In many other cases though it should be possible to render the result of 3D transform using 2D transforms in the passed context. For example in my case instead of using a different image arrow_rotated.png I could use the arrow.png image and mirror it and draw it in the context.

Arash