
The stdint.h header at my company reads:

#define INT64_MIN -9223372036854775808LL

But in some code in my project, a programmer wrote:

#undef INT64_MIN
#define INT64_MIN (-9223372036854775807LL -1)

He then uses this definition in the code.
The project compiles with no warnings/errors.
When I attempted to remove his definition and use the default one, I got:

error: integer constant is so large that it is unsigned

The two definitions appear to be equivalent.
Why does one compile fine and the other fails?

abelenky
  • Isn't it a bit scary that a #define from a standard header file causes an error? – fvu Jun 29 '12 at 23:21
  • @fvu: that's OK, it turns out the problem is just that abelenky's employer has replaced the standard header with their own broken version (see comments below). – Steve Jessop Jun 30 '12 at 01:55
  • @SteveJessop I saw that in the discussion. Looks like a pretty hare-brained idea to me. – fvu Jun 30 '12 at 08:40
  • Does this answer your question? [Why do we define INT\_MIN as -INT\_MAX - 1?](https://stackoverflow.com/questions/26003893/why-do-we-define-int-min-as-int-max-1) – phuclv Apr 30 '21 at 00:38

1 Answer


-9223372036854775808LL is not a single literal. It's an expression consisting of a unary - operator applied to the constant 9223372036854775808LL.

That constant is (barely) outside the range of type long long, which causes the warning. (I'm assuming long long is 64 bits, which it almost certainly is.)

The expression (-9223372036854775807LL - 1), on the other hand, contains only literals that are within the range of long long, and is in fact a more valid definition for INT64_MIN, since it has the correct type, long long rather than unsigned long long (as Steve Jessop points out in a comment).

Keith Thompson
  • Good explanation. Does that mean that the definition in stdint.h is wrong, as it uses a value outside the LL range? – abelenky Jun 29 '12 at 23:26
  • @abelenky: Not necessarily. As long as it gives the correct behavior *for the compiler it's meant to be used with*, it's ok. Even if it produces a spurious warning, that doesn't make it non-conforming -- though I'd still consider that a bug. Note that `<stdint.h>` on my system uses the `-1` trick. What system are you on, and where did your `<stdint.h>` come from? – Keith Thompson Jun 29 '12 at 23:28
  • I'm on a 64-bit Linux system. The stdint.h comes from the company source-repository. I tend to assume it has some Unix background, but don't know the details of its history. – abelenky Jun 29 '12 at 23:31
  • @abelenky: Why does your company provide its own `<stdint.h>`? It ought to be provided by your compiler or runtime library. On any Linux system, it's provided by glibc. – Keith Thompson Jun 29 '12 at 23:40
  • Why does my company do most of the things it does? "That's like asking the square root of a million; no one will ever know" – Nelson Muntz. It's just the way our code is, and I'm not the boss. – abelenky Jun 29 '12 at 23:43
  • @Keith: 7.18.2 of C99 says of `INT64_MIN`, "this expression shall have the same type as would an expression that is an object of the corresponding type converted according to the integer promotions". So your definition is more valid than the questioner's employer's definition, not equally valid: the type of `INT64_MIN` *must* be `int64_t`, not `uint64_t`. An expression with the wrong type cannot have the correct behavior for the compiler it's meant to be used with, in the case someone writes `if (INT64_MIN > 0) ...`. But on many/most compilers, `((long long)-922 etc)` would be OK. – Steve Jessop Jun 30 '12 at 01:48