3

Consider the following macro:

#define MIN_SWORD (signed int) 0x8000

When it is used in, e.g., the following expression:

signed long s32;
if (s32 < (signed long)MIN_SWORD)...

it is expected to perform this check:

if (s32 < -32768)

On some compilers it seems to work fine, but on others the expression is evaluated as:

if (s32 < 32768)

My question: How is an ANSI C compiler supposed to evaluate the expression `(signed long)(signed int)0x8000`?

It seems that on some compilers the cast to `(signed int)` does not cause the (expected) conversion from the positive constant 0x8000 to the minimum negative value of a signed int, if the expression is afterwards cast to the wider type signed long. In other words, the evaluated constant is not equivalent to -32768L (but 32768L).

Is this behavior perhaps undefined in ANSI C?

Hasturkun
  • 35,395
  • 6
  • 71
  • 104
Oliver
  • 31
  • 1
  • I forgot to mention that the problem is related to an embedded target with 16-bit int. – Oliver Feb 08 '11 at 14:42
  • Tangential comment: You shouldn't need to generate your own macros for limits, as they should all be in `limits.h` already. – Oliver Charlesworth Feb 08 '11 at 15:01
  • 1
    Tangential comment #2: It's unwise to be using `int`, `long` etc., especially for embedded work. I would recommend using `uint16_t`, `uint32_t` typedefs (can usually be found in `stdint.h`), to make it explicit what size you expect each type to be. – Oliver Charlesworth Feb 08 '11 at 15:09
  • Aside from limits.h, if you want -32768, why not just say that instead of (signed int)0x8000? – Jim Balter Feb 08 '11 at 15:21
  • similar questions: http://stackoverflow.com/questions/14695118/2147483648-0-returns-true-in-c http://stackoverflow.com/questions/12620753/why-it-is-different-between-2147483648-and-int-2147483648 – phuclv Jan 27 '15 at 06:45
  • [-32768 not fitting into a 16 bit signed value](http://stackoverflow.com/questions/26375337/32768-not-fitting-into-a-16-bit-signed-value?lq=1) – phuclv Jan 27 '15 at 06:46

1 Answer

2

If an int is 16-bit on your platform, then the type of 0x8000 is unsigned int (see 6.4.4 p.5 of the standard). Converting to a signed int is implementation-defined if the value cannot be represented (see 6.3.1.3 p.3). So the behaviour of your code is implementation-defined.

Having said that, in practice, I would've assumed that this should always do what you "expect". What compiler is this?

Oliver Charlesworth
  • 267,707
  • 33
  • 569
  • 680
  • Fortunately, I figured out in the meantime that the reason for the strange behavior was actually an error I made when integrating the embedded code as an S-function: the type we used on the embedded target for 16-bit was based on "int". That's why it works fine on the target. But the "int" is compiled as an S-function in MATLAB/Simulink as 32-bit on the PC. Sorry for the confusion! I also agree that the lower limit should be defined by explicitly stating a negative number rather than relying on "overflow" behavior caused by the cast. And yes, using the standard `limits.h` is much better too. – Oliver Feb 09 '11 at 16:32