1

I have a profile view with a UIImageView where the user can change their picture. I save both the old and the new image so I can compare them: if they are the same, I don't need to push the new one to my server.

I tried this, but it doesn't really work:

+ (NSData*)returnImageAsData:(UIImage *)anImage {
    // Scale the image to a fixed width and return a JPEG representation.
    float i_width = 400.0f;
    float oldWidth = anImage.size.width;
    float scaleFactor = i_width / oldWidth;

    float newHeight = anImage.size.height * scaleFactor;
    float newWidth = oldWidth * scaleFactor;

    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight));
    [anImage drawInRect:CGRectMake(0, 0, newWidth, newHeight)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    NSData *imageData = UIImageJPEGRepresentation(newImage, 0.5f);

    return imageData;
}

+ (BOOL)image:(UIImage *)image1 isEqualTo:(UIImage *)image2
{

    NSData *data1 = [self returnImageAsData:image1];
    NSData *data2 = [self returnImageAsData:image2];

    return [data1 isEqual:data2];
}

Any idea how to check whether two images are the same?

End result:

+ (NSData *)returnImageAsData:(UIImage *)anImage {
    // Return a JPEG representation of the image (no rescaling here,
    // so that picking the same image twice produces the same data).
    NSData *imageData = UIImageJPEGRepresentation(anImage, 0.5f);

    return imageData;
}

+ (BOOL)image:(UIImage *)image1 isEqualTo:(UIImage *)image2
{
    CGSize size1 = image1.size;
    CGSize size2 = image2.size;

    // If the dimensions differ, the images cannot be identical.
    if (!CGSizeEqualToSize(size1, size2)) {
        return NO;
    }

    NSData *data1 = UIImagePNGRepresentation(image1);
    NSData *data2 = UIImagePNGRepresentation(image2);

    return [data1 isEqual:data2];
}
SaifDeen
  • In all technicality, you could begin by checking the two images' dimensions: if they differ, there's no need to go any further, the images are different. If they have the same dimensions (in pixels), you could simply loop through all pixels and stop at the first pixel that differs, then send the new one to the server. If the loop reaches the end without finding a different pixel, the images are the same. – trumpetlicks Feb 20 '14 at 21:22
  • Are you trying to compare two images that start off in different sizes and formats? The chances of getting an exact match after scaling both to the same size is essentially zero. All it takes is a single pixel having the slightest difference in color to cause your code to fail. – rmaddy Feb 20 '14 at 21:22
  • 1
    You can perform an `md5sum` on each image's data and if it matches then they're the same. – Tim Reddy Feb 20 '14 at 21:22
  • @TimReddy - the only question with that is, which is faster, an MD5 or checking pixel by pixel? – trumpetlicks Feb 20 '14 at 21:24
  • 1
    It seems that if you just MD5 the raw data, you avoid having to render the image into some context in order to check pixel by pixel. I'm not sure which is faster. The OP is rendering into a JPEG context at 50% quality. I'm sure there is some lossiness going on there to make it pretty difficult to get an exact match pixel by pixel. I agree with @maddy on his comment. – Tim Reddy Feb 20 '14 at 21:41
  • But what should I do if a user chooses the same image? I can't compare it, because my saved image of course has a different size (the one I scaled it to), so it will always return true, right? – SaifDeen Feb 21 '14 at 09:51
  • possible duplicate of [Comparing UIImage](http://stackoverflow.com/questions/3400707/comparing-uiimage) – hpique May 19 '14 at 06:03

2 Answers

1

If you want to see whether the two images are pixel-identical, it should be pretty easy.

Saving the images to JPEG is likely to cause problems because JPEG is a lossy format.

As others have suggested, first make sure the height and width of both images match. If not, stop. The images are different.

If those match, use a function like UIImagePNGRepresentation() to convert each image to a lossless data format. Then use isEqual: on the NSData objects you get back.
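The dimensions-then-bytes check can be sketched in plain C (a hedged sketch: the buffers stand in for lossless encodings such as raw grayscale pixels, one byte per pixel; the function name is illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

// Sketch: compare two images by dimensions first, then byte-for-byte.
// Assumes 1 byte per pixel so the buffer length is width * height.
static bool images_equal(const unsigned char *bytes1, size_t w1, size_t h1,
                         const unsigned char *bytes2, size_t w2, size_t h2)
{
    // Different dimensions: the images cannot be identical.
    if (w1 != w2 || h1 != h2) {
        return false;
    }
    // Same dimensions: a byte comparison decides it.
    return memcmp(bytes1, bytes2, w1 * h1) == 0;
}
```

The early dimension check is the cheap fast path; memcmp only runs when it could actually succeed.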

If you want to check if the images LOOK the same (like 2 photographs of the same scene), you have a much, much harder problem on your hands. You might have to resort to a package like OpenCV to compare the images.

EDIT: I don't know if UIImage has a custom implementation of isEqual that you can use to compare two images. I'd try that first.

Looking at the docs, UIImage also conforms to NSCoding, so you could use +[NSKeyedArchiver archivedDataWithRootObject:] to convert the images to data. That would probably be faster than PNG-encoding them.

Finally, you could get a pointer to the images' underlying CGImage objects, get their data providers, and compare their byte-streams that way.

Duncan C
  • Sorry, that answer doesn't quite work yet. The problem is that I can't check the dimensions of the image, because the user may be choosing the same image he already picked, but I changed its size. So the check will fail; that's why I resize both first and then compare them. – SaifDeen Feb 21 '14 at 09:44
  • 1
    You changed the sizes? OK then, different approach. When the user picks an image, record its size and its hash. Then use that info when you need to compare them. Comparing images that have been scaled is going to be quite difficult. – Duncan C Feb 21 '14 at 12:38
1

Step 1: shrink the image. Step 2: simplify the colors. Step 3: calculate the average. Step 4: compare each pixel's gray value to the average. Step 5: calculate the hash value.

Step by step: the first step is to shrink the image. Reduce it to 8x8, a total of 64 pixels. This step removes the details of the picture and keeps only the basic structure and light information, discarding differences in size and aspect ratio.

- (UIImage *)originImage:(UIImage *)image scaleToSize:(CGSize)size
{
    UIGraphicsBeginImageContext(size);

    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];

    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}
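The shrink step can also be sketched framework-free in plain C (a hedged sketch using nearest-neighbor sampling over a grayscale buffer; the function name and buffer layout are illustrative, and real code would typically use Core Graphics as above):

```c
#include <assert.h>
#include <stddef.h>

// Sketch: nearest-neighbor downscale of a grayscale buffer to 8x8 = 64 pixels.
// src is row-major, one byte per pixel, src_w x src_h in size.
static void shrink_to_8x8(const unsigned char *src, size_t src_w, size_t src_h,
                          unsigned char dst[64])
{
    for (size_t y = 0; y < 8; y++) {
        for (size_t x = 0; x < 8; x++) {
            size_t sx = x * src_w / 8;   // nearest source column
            size_t sy = y * src_h / 8;   // nearest source row
            dst[y * 8 + x] = src[sy * src_w + sx];
        }
    }
}
```

Nearest-neighbor is the crudest resampling choice; it is enough here because the hash only needs coarse structure, not smooth detail.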

Step 2, simplify the color: convert the shrunken picture to grayscale, so that every pixel is reduced to a single gray value.

- (UIImage *)getGrayImage:(UIImage *)sourceImage
{
    int width = sourceImage.size.width;
    int height = sourceImage.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(nil, width, height, 8, 0, colorSpace, kCGImageAlphaNone);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        return nil;
    }
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), sourceImage.CGImage);
    CGImageRef grayCGImage = CGBitmapContextCreateImage(context);
    UIImage *grayImage = [UIImage imageWithCGImage:grayCGImage];
    CGImageRelease(grayCGImage);   // imageWithCGImage: retains it, so release our reference
    CGContextRelease(context);
    return grayImage;
}

Step 3, calculate the average: compute the average gray value of all 64 pixels.

- (unsigned char *)grayscalePixels:(UIImage *)image
{
    // Bits per pixel: grayscale, so one byte (8 bits) per pixel
#define BITS_PER_PIXEL 8
    // Bits per component: equal to bitsPerPixel because a single gray component makes up the pixel
#define BITS_PER_COMPONENT (BITS_PER_PIXEL)
    // Bytes per pixel: bitsPerPixel divided by the bits per component (1 in this case)
#define BYTES_PER_PIXEL (BITS_PER_PIXEL/BITS_PER_COMPONENT)

    // Define the color space (in this case it's gray)
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceGray();

    // The number of bytes per row is the width times the number of bytes per pixel
    size_t bytesPerRow = image.size.width * BYTES_PER_PIXEL;
    // Allocate the memory backing the bitmap context.
    // The caller takes ownership and must free() the returned buffer.
    unsigned char *bitmapData = (unsigned char *)malloc(bytesPerRow * image.size.height);

    // Create the bitmap context; alpha is set to none because we don't care about alpha values
    CGContextRef context = CGBitmapContextCreate(bitmapData, image.size.width, image.size.height, BITS_PER_COMPONENT, bytesPerRow, colourSpace, kCGImageAlphaNone);

    // We are done with the color space now, so no point in keeping it around
    CGColorSpaceRelease(colourSpace);

    // Create a CGRect covering the pixels we want
    CGRect rect = CGRectMake(0.0, 0.0, image.size.width, image.size.height);
    // Draw the image into the bitmap context to populate the pixel data
    CGContextDrawImage(context, rect, image.CGImage);
    // Obtain the pixel data from the bitmap context (this is the same buffer we allocated above)
    unsigned char *pixelData = (unsigned char *)CGBitmapContextGetData(context);

    // Release the bitmap context; the malloc'd buffer remains valid
    CGContextRelease(context);

    return pixelData;
#undef BITS_PER_PIXEL
#undef BITS_PER_COMPONENT
#undef BYTES_PER_PIXEL
}

Step 4, compare each pixel's gray value to the average. The result is returned as a string of 0s and 1s:

- (NSString *)myHash:(UIImage *)img
{
    unsigned char *pixelData = [self grayscalePixels:img];

    // Average gray value; the image is assumed to already be 8x8 = 64 pixels
    int total = 0;
    for (int i = 0; i < img.size.height; i++) {
        for (int j = 0; j < img.size.width; j++) {
            total += (int)pixelData[(i * ((int)img.size.width)) + j];
        }
    }
    int ave = total / 64;

    // Emit "1" for pixels at or above the average, "0" for those below
    NSMutableString *result = [[NSMutableString alloc] init];
    for (int i = 0; i < img.size.height; i++) {
        for (int j = 0; j < img.size.width; j++) {
            int a = (int)pixelData[(i * ((int)img.size.width)) + j];
            if (a >= ave) {
                [result appendString:@"1"];
            } else {
                [result appendString:@"0"];
            }
        }
    }

    free(pixelData);  // the buffer was malloc'd in grayscalePixels:
    return result;
}

Step 5, calculate the hash value. Combine the comparison results from the previous step into a 64-bit value; this is the fingerprint of the picture. The order of combination is not important, as long as all pictures use the same order. Once you have the fingerprints, you can compare two pictures by counting how many of the 64 bits differ. In theory, this is equivalent to calculating the Hamming distance. If no more than 5 bits differ, the two pictures are very similar; if more than 10 differ, they are two different pictures.

0111111011110011111100111110000111000001100000011110001101111010
1111111111110001111000011110000111000001100000011110000111111011
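Comparing two such 64-character fingerprint strings comes down to the Hamming distance, which can be sketched in plain C (the function name is illustrative):

```c
#include <assert.h>
#include <stddef.h>

// Sketch: count the positions at which two equal-length "0"/"1"
// fingerprint strings differ -- i.e. their Hamming distance.
static int fingerprint_distance(const char *a, const char *b, size_t len)
{
    int distance = 0;
    for (size_t i = 0; i < len; i++) {
        if (a[i] != b[i]) {
            distance++;
        }
    }
    return distance;
}
```

Under the rule above, a distance of 5 or less would count the two images as very similar, and more than 10 as different.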

wormlxd