Can you explain why this code:
NSInteger i = -1;
NSUInteger x = 1;
NSLog(@"min = %lu", MIN(i, x));
NSLog(@"max = %lu", MAX(i, x));;
prints min = 1
max = 18446744073709551615
You are comparing two different types: a signed NSInteger and an unsigned NSUInteger. Inside the MIN/MAX macros the usual arithmetic conversions turn the signed operand into an unsigned one, so -1 becomes 18446744073709551615 before the comparison even happens.
Moreover, you print the result with %lu instead of %ld, so a negative value can never be displayed as negative. That is why you see the big number. Cast both arguments to a common signed type and print with %ld:
NSInteger i = -1;
NSUInteger x = 1;
NSLog(@"min = %ld", MIN(i, (NSInteger)x));
NSLog(@"max = %ld", MAX(i, (NSInteger)x));
It is because i is being implicitly converted to an unsigned long. This is not an Xcode quirk but C's usual arithmetic conversions: when a signed and an unsigned operand of the same rank meet in an expression, the signed one is converted to unsigned. Here is a similar post: NSUInteger vs NSInteger, int vs unsigned, and similar cases
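To make the conversion visible, here is a small sketch; the ternary is only the conventional expansion of MIN for illustration, not Foundation's exact macro text:

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        NSInteger i = -1;
        NSUInteger x = 1;
        // Usual arithmetic conversions: the signed operand is converted to
        // unsigned long before the comparison, so -1 becomes ULONG_MAX.
        NSLog(@"(NSUInteger)i = %lu", (unsigned long)i); // 18446744073709551615
        // MIN(i, x) behaves roughly like this ternary; the comparison is
        // false because 18446744073709551615 < 1 is false, so it yields x.
        NSUInteger m = (i < x) ? i : x;
        NSLog(@"min = %lu", m);                          // min = 1
    }
    return 0;
}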