I think you should check the sizes of the images first. If the size and scale of both images are equal, compare the pixel data directly for equality rather than the images' PNG representations; this will be much faster. (The link shows you how to get the pixel data. To compare it, use memcmp; a sketch of the full comparison follows the helper below.)
From that post (slightly modified):
#import <UIKit/UIKit.h>

NSData *rawDataFromUIImage(UIImage *image)
{
    assert(image);
    // Draw the image into an RGBA8888 bitmap buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    NSUInteger byteSize = bytesPerRow * height;
    unsigned char *rawData = (unsigned char *)malloc(byteSize);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    // Transfer ownership of the malloc'd buffer to NSData so it is
    // freed automatically when the NSData object is deallocated
    return [NSData dataWithBytesNoCopy:rawData length:byteSize freeWhenDone:YES];
}
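
Putting it together, the comparison could look something like this. This is only a minimal sketch: imagesHaveIdenticalPixels is my own name, not something from the linked post, and it assumes both images render into the RGBA format produced by rawDataFromUIImage above:

// Hypothetical helper: cheap size/scale checks first, then a raw
// byte-for-byte comparison of the two pixel buffers.
BOOL imagesHaveIdenticalPixels(UIImage *a, UIImage *b)
{
    // Different dimensions or scale cannot possibly match
    if (!CGSizeEqualToSize(a.size, b.size) || a.scale != b.scale)
        return NO;

    NSData *dataA = rawDataFromUIImage(a);
    NSData *dataB = rawDataFromUIImage(b);
    if (dataA.length != dataB.length)
        return NO;

    // memcmp compares the buffers byte by byte
    return memcmp(dataA.bytes, dataB.bytes, dataA.length) == 0;
}

NSData's isEqualToData: would also work in place of memcmp here and reads a bit more idiomatically.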
About why this is faster: UIImagePNGRepresentation (1) fetches the raw bitmap data and then (2) encodes it as PNG. Skipping the second step can only improve performance, because PNG encoding is much more work than step 1 alone. And memcmp is faster than everything else in this example.
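
For contrast, the PNG-based comparison this replaces pays for a full PNG encode of both images on every call, something like the following (imageA and imageB stand in for your two images):

// The slower alternative: both images are fully PNG-encoded
// before a single byte is compared.
BOOL equal = [UIImagePNGRepresentation(imageA)
                 isEqualToData:UIImagePNGRepresentation(imageB)];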