As described in the title, when I perform the following division I get two different results depending on the architecture of the device:

unsigned int a = 42033;
unsigned int b = 360;
unsigned int c = 466;
double result = a / (double)(b * c);
// on arm64 -> result = 0.25055436337625181
// on armv7 -> result = 0.24986030696800732

Why don't the results match?

According to Apple's 64-Bit Transition Guide for Cocoa Touch, these data types have the same size in the 32-bit and 64-bit runtimes.
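
That claim is easy to confirm on-device. Here is a minimal sketch (not part of the original question) that prints the relevant sizes and the result of the expression in plain C:

#include <stdio.h>

int main(void) {
    // On both the 32-bit (armv7) and 64-bit (arm64) iOS runtimes,
    // unsigned int is 4 bytes and double is an 8-byte IEEE 754 binary64.
    printf("sizeof(unsigned int) = %zu, sizeof(double) = %zu\n",
           sizeof(unsigned int), sizeof(double));

    unsigned int a = 42033;
    unsigned int b = 360;
    unsigned int c = 466;
    // With exactly these inputs, the expected result is the value reported
    // for arm64 (0.25055436337625181).
    printf("a / (double)(b * c) = %.17f\n", a / (double)(b * c));
    return 0;
}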

EDIT

The complete code:

#import "UIImage+MyCategory.h"

#define CLIP_THRESHOLD 0.74 // if this much of the image is the clip color, leave it alone

typedef struct {
    unsigned int leftNonColorIndex;
    unsigned int rightNonColorIndex;
    unsigned int nonColorCount;
} scanLineResult;

static inline scanLineResult scanOneLine(unsigned int *scanline, unsigned int count, unsigned int color, unsigned int mask) {
    scanLineResult result = {UINT32_MAX, 0, 0};

    for (int i = 0; i < count; i++) {
        if ((*scanline++ & mask) != color) {
            result.nonColorCount++;
            result.leftNonColorIndex = MIN(result.leftNonColorIndex, i);
            result.rightNonColorIndex = MAX(result.rightNonColorIndex, i);
        }
    }

    return result;
}

typedef struct {
    unsigned int leftNonColorIndex;
    unsigned int topNonColorIndex;
    unsigned int rightNonColorIndex;
    unsigned int bottomNonColorIndex;
    unsigned int nonColorCount;

    double colorRatio;
} colorBoundaries;

static colorBoundaries findTrimColorBoundaries(unsigned int *buffer,
                                               unsigned int width,
                                               unsigned int height,
                                               unsigned int bytesPerRow,
                                               unsigned int color,
                                               unsigned int mask)
{
    colorBoundaries result = {UINT32_MAX, UINT32_MAX, 0, 0, 0.0};
    unsigned int *currentLine = buffer;

    for (int i = 0; i < height; i++) {
        scanLineResult lineResult = scanOneLine(currentLine, width, color, mask);
        if (lineResult.nonColorCount) {
            result.nonColorCount += lineResult.nonColorCount;
            result.topNonColorIndex = MIN(result.topNonColorIndex, i);
            result.bottomNonColorIndex = MAX(result.bottomNonColorIndex, i);
            result.leftNonColorIndex = MIN(result.leftNonColorIndex, lineResult.leftNonColorIndex);
            result.rightNonColorIndex = MAX(result.rightNonColorIndex, lineResult.rightNonColorIndex);
        }

        currentLine = (unsigned int *)((char *)currentLine + bytesPerRow);
    }

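    // delta is the fraction of scanned pixels that are NOT the trim color; colorRatio is its complement.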
    double delta = result.nonColorCount / (double)(width * height);
    result.colorRatio = 1.0 - delta;

    return result;
}

@implementation UIImage (MyCategory)

- (UIImage *)crop:(CGRect)rect {

    rect = CGRectMake(rect.origin.x * self.scale,
                      rect.origin.y * self.scale,
                      rect.size.width * self.scale,
                      rect.size.height * self.scale);

    CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef
                                          scale:self.scale
                                    orientation:self.imageOrientation];
    CGImageRelease(imageRef);
    return result;
}

- (UIImage*)trimWhiteBorders {
#ifdef __BIG_ENDIAN__
    // undefined
#else
    const unsigned int whiteXRGB = 0x00ffffff;
    // Which bits to actually check
    const unsigned int maskXRGB = 0x00ffffff;
#endif

    CGImageRef image = [self CGImage];
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(image);

    // Only support default image formats
    if (bitmapInfo != (kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Host))
        return nil;

    CGDataProviderRef dataProvider = CGImageGetDataProvider(image);
    CFDataRef imageData = CGDataProviderCopyData(dataProvider);

    colorBoundaries result = findTrimColorBoundaries((unsigned int *)CFDataGetBytePtr(imageData),
                                                     (unsigned int)CGImageGetWidth(image),
                                                     (unsigned int)CGImageGetHeight(image),
                                                     (unsigned int)CGImageGetBytesPerRow(image),
                                                     whiteXRGB,
                                                     maskXRGB);

    CFRelease(imageData);

    if (result.nonColorCount == 0 || result.colorRatio > CLIP_THRESHOLD)
        return self;

    CGRect trimRect = CGRectMake(result.leftNonColorIndex,
                                 result.topNonColorIndex,
                                 result.rightNonColorIndex - result.leftNonColorIndex + 1,
                                 result.bottomNonColorIndex - result.topNonColorIndex + 1);


    return [self crop:trimRect];
}

@end
maross
  • There is something suspicious about your armv7 result - post the actual code you used, the build command line, compiler version, etc. Also, did you use `double` or did you use `CGFloat`? – Paul R Apr 08 '15 at 15:53
  • I've edited the question; I used Apple LLVM 6.0 with Xcode 6.2. – maross Apr 08 '15 at 16:16
  • There is a similar question, but for Visual Studio (PC) builds: http://stackoverflow.com/questions/22710272/difference-in-floating-point-arithmetics-between-x86-and-x64. It might be that a similar explanation applies for the arm family of processors too. – Cristik Apr 08 '15 at 17:58
  • 2
  • Your "complete code" doesn't use the values you originally posted. If those four lines are enough to cause the problem, don't post more. But do post the other information. – Teepeemm Apr 08 '15 at 19:13
  • integrated FPU or software? – Spektre Apr 09 '15 at 07:42
  • @Spektre: These are Apple ARM parts. They're all high-end ARM cores with FPUs and NEON. Furthermore, Apple doesn't indulge in the soft-float-ABI brain-damage found on Android. – marko Apr 11 '15 at 07:56
  • 1
  • I would check the instructions generated for `double result = a / (double)(b * c);` and check that it is double precision throughout. You might try casting `a` to `double` for belt and braces, though I don't think you should need it. The error is about 0.28%. Also, be aware that double-precision floating point is expensive on ARMv7 parts, and divide likely particularly so. You might want to check whether the default compiler flags enable a trade-off between less-compliant IEEE 754 and higher performance. – marko Apr 11 '15 at 07:57
  • There exists an alternative algorithm for floating-point division, which technically finds the inverse of the divisor (via something like the Newton–Raphson method) and then performs multiplication. It's much faster than the classic long-division (a.k.a. digit-recurrence) IEEE 754-conforming implementation, but loses a lot of precision. This could very well be the case here. I also heard that sometimes other methods (like Goldschmidt division) can be applied, too. –  Apr 24 '15 at 16:14
  • 3
  • For the "wrong" operation `b` (presumably `width`) has a value of 361 (yielding a result of 0.24986030696800732). This has nothing to do with the precision of the operations. (Reproduced this in Java, on a Windows-based PC.) – Hot Licks Apr 24 '15 at 18:56
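
Following up on the suggestions in the comments above, one low-tech way to narrow this down is to print the operands that actually reach the division on each device. A minimal sketch (the helper below is hypothetical, not part of the original code), to be called right before `delta` is computed in `findTrimColorBoundaries`:

#include <stdio.h>

// Hypothetical helper: logs the operands feeding the division and the ratio
// computed from them, so the armv7 and arm64 devices can be compared directly.
void logDivisionOperands(unsigned int nonColorCount,
                         unsigned int width,
                         unsigned int height) {
    printf("nonColorCount=%u width=%u height=%u ratio=%.17f\n",
           nonColorCount, width, height,
           nonColorCount / (double)(width * height));
}

If the printed operands differ between the two devices (for example a different `width`), the divergence is in the inputs rather than in the floating-point division itself.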

2 Answers


Tested the code below with Xcode 6.3.1 (iOS 8.3, LLVM 6.1) on an iPad 3 (armv7) and an iPhone 6 (arm64), and both devices produced the same value to at least 15 decimal places.

unsigned int a = 42033;
unsigned int b = 360;
unsigned int c = 466;
double result = a / (double)(b * c);
// values reported in the question:
// on arm64 -> result = 0.25055436337625181
// on armv7 -> result = 0.24986030696800732

NSString *msg = [NSString stringWithFormat:@"result: %.15f", result];

[[[UIAlertView alloc] initWithTitle:@"" message:msg delegate:nil cancelButtonTitle:@"#jolo" otherButtonTitles:nil] show];

That being said, Xcode 6.3 includes LLVM 6.1, which includes changes for arm64 and floating-point math. See the Apple LLVM Compiler Version 6.1 section in the release notes: https://developer.apple.com/library/ios/releasenotes/DeveloperTools/RN-Xcode/Chapters/xc6_release_notes.html
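
If there is any doubt about which toolchain produced the binary running on each device, a quick check (a sketch using Clang's standard predefined macros, not something from the code above) is to log the architecture and compiler version at runtime:

#include <stdio.h>

// __VERSION__ expands to the compiler's version string; __arm64__ and __arm__
// are predefined by Clang for the arm64 and armv7 slices respectively.
void logBuildInfo(void) {
#if defined(__arm64__)
    printf("arm64 build, compiler: %s\n", __VERSION__);
#elif defined(__arm__)
    printf("armv7 build, compiler: %s\n", __VERSION__);
#else
    printf("other architecture, compiler: %s\n", __VERSION__);
#endif
}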


Java code:

public class Division {
    public static void main(String[] args) {
        int a = 42033;
        int b = 360;
        int c = 466;
        double result = a / (double)(b * c);
        System.out.println("Result = " + result);
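        // Perturb each operand by one to see which change reproduces the armv7 value.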
        double result2 = (a - 1) / (double) (b * c);
        double result3 = (a) / (double) ((b + 1) * c);
        double result4 = (a) / (double) (b * (c + 1));
        System.out.println("Result2 = " + result2);
        System.out.println("Result3 = " + result3);
        System.out.println("Result4 = " + result4);
    }
}

Results:

C:\JavaTools>java Division
Result = 0.2505543633762518
Result2 = 0.250548402479733
Result3 = 0.24986030696800732
Result4 = 0.25001784439685937

As can be seen, the "wrong" result is explained by `b` having a value (361) other than what the OP stated; it has nothing to do with the precision of the arithmetic.
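
For completeness, the same check in plain C (a minimal sketch re-using the numbers from the question; the comments quote the values the OP reported):

#include <stdio.h>

int main(void) {
    unsigned int a = 42033;
    unsigned int c = 466;
    // 360 * 466 = 167760 -> matches the value reported on arm64
    printf("b=360: %.17f\n", a / (double)(360u * c));   // 0.25055436337625181
    // 361 * 466 = 168226 -> matches the value reported on armv7
    printf("b=361: %.17f\n", a / (double)(361u * c));   // 0.24986030696800732
    return 0;
}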

Hot Licks