
I am working on something and came across code similar to the following:

#define MODULUS(a,b)        ((a) >= 0 ? (a)%(b) : (b)-(-(a))%(b))

unsigned char w;
unsigned char x;
unsigned char y;
char z;

/* Code that assigns values to w,x and y.  All values assigned 
   could have been represented by a signed char. */

z = MODULUS((x - y), w);

It is my understanding that the arithmetic (x - y) will be performed prior to any type conversion, so the macro should always evaluate to (a)%(b) -- the result would be an unsigned char, which is always greater than or equal to zero. However, the code functions as intended, so I suspect my understanding is flawed. So...

My questions are these:

  1. Does an implicit type conversion to signed char occur before the expression is evaluated?

  2. Is there a situation (for example, if the unsigned values were large enough that they could not be represented by a signed value) where the above code would not work?

embedded_guy

1 Answer


Does an implicit type conversion to signed char occur before the expression is evaluated?

No, a conversion to int occurs before the expression x - y is evaluated¹. Thus the result can be negative.

Is there a situation (for example, if the unsigned values were large enough that they could not be represented by a signed value) where the above code would not work?

If sizeof(int) == 1, the integer promotions would promote the unsigned chars to unsigned int, and that could produce wrong results: because the arithmetic is then unsigned, a reduction modulo UINT_MAX + 1 is performed before the modulus by w.

¹ By the integer promotions.

Daniel Fischer
  • C99 (sec 5.2.4.2.1, at least of the n1124 draft), requires `INT_MIN` to be less than or equal to -32767, and `INT_MAX` to be greater than or equal to 32767. So, at least in C99, `sizeof(int) >= 2`. – sfstewman Jul 20 '12 at 00:39
  • @sfstewman `CHAR_BIT` can be greater than 8. If `CHAR_BIT` is 16 or larger, `sizeof(int)` can be 1. Unlikely, but possible. See [here](http://stackoverflow.com/questions/2098149/what-platforms-have-something-other-than-8-bit-char) – Daniel Fischer Jul 20 '12 at 00:53
  • That's fascinating. Thanks for the link. (Just answered the question that I asked in the previous version of this comment.) – sfstewman Jul 20 '12 at 02:40
  • @sfstewman In C, byte is the unit of storage a `char` takes. So a byte can be 32 bits, or 9, anything not smaller than 8. `sizeof(type)` gives the size in units of `char`, so by definition `sizeof(char)` is 1. Since the minimal specification requires `int` to have at least 16 bits, for `CHAR_BIT >= 16`, it is possible that `sizeof(int) == 1`. But that's pretty much a theoretical scenario; 8-bit `char`s are nowadays the rule, and I don't think any hardware/OS newer than 25 years has a different `char` size. But the standard allows it. – Daniel Fischer Jul 20 '12 at 02:54