
Why does this compile without errors? What am I doing wrong?

#include <stdio.h>

int main (){
    int n1 = 90, n2 = 93, n3 = 95;
    int i = 2147483647;
    int ii = 2147483646;
    int iii = 2147483650;
    char c1[50] = {'\0'};
    char c2[50] = {'\0'};
    char c3[50] = {'\0'};

    n1 = sprintf(c1, "%d", i+i);
    n2 = sprintf(c2, "%d", ii);
    n3 = sprintf(c3, "%d", iii);
    printf("n1 = %d, n2 = %d, n3 = %d\n  i = |%s| \n ii = |%s|\niii = |%s|\n", n1, n2, n3, c1, c2, c3);
        return 0;
}
Compiled with: gcc filename -Wall -Wextra -Werror

I guess %d can't hold more than an int, but it compiles, and the result is:

n1 = 2, n2 = 10, n3 = 11
  i = |-2|
 ii = |2147483646|
iii = |-2147483646|

I was expecting a GCC error.

akaCroc
  • What error(s) were you expecting and why? – Andrew Henle Feb 08 '23 at 16:18
  • Signed integer overflow is undefined behavior in C, which means the compiler may accept the program without an error and still crash at runtime; it is up to the developer to make sure overflows don't occur. See: https://stackoverflow.com/questions/67755339/is-signed-integer-overflow-undefined-behaviour-or-implementation-defined – balki Feb 08 '23 at 16:21
  • `I guess %d can't be more than int` Well, `%d` always takes exactly an `int` value, and you are only providing `int` values, as all your variables are of type `int`. – Gerhardh Feb 08 '23 at 16:23
  • If you want GCC to raise a runtime error on overflow in the signed addition, you need to add `-ftrapv`. – PeterT Feb 08 '23 at 16:23
  • I tried to modify the title to make it more specific to the problem being asked. The general title applies to any number of things, and so makes the question less useful to future users of Stack Overflow. – jxh Feb 08 '23 at 17:23
  • This question is being discussed on meta: https://meta.stackoverflow.com/q/423098/4014959 – PM 2Ring Feb 08 '23 at 19:29

2 Answers


There is an error in the initialization of iii: the constant provided does not fit in an int.

GCC will diagnose this issue if you enable -pedantic. From the documentation:

-Wpedantic
-pedantic
Issue all the warnings demanded by strict ISO C and ISO C++; reject all programs that use forbidden extensions, and some other programs that do not follow ISO C and ISO C++. For ISO C, follows the version of the ISO C standard specified by any -std option used.

When doing so, I get the error:

.code.tio.c: In function ‘main’:
.code.tio.c:7:15: error: overflow in conversion from ‘long int’ to ‘int’ changes value from ‘2147483650’ to ‘-2147483646’ [-Werror=overflow]
     int iii = 2147483650;
               ^~~~~~~~~~
cc1: all warnings being treated as errors



Other problems

Arithmetic leading to signed integer overflow

The arithmetic operation i+i triggers signed integer overflow, which is undefined behavior, so the compiler is free to do whatever it wants with that code.

Note that both operands to the + operator have type int, so if any result is generated, it would be an int. However, since signed integer overflow is undefined, no result may be generated (e.g., the program could just halt), or a random result may be generated, or some other overflow behavior may occur that matches your observation.

In the general case, there isn't any way for the compiler to know if any particular operation will actually cause overflow. In this case, static code analysis may have revealed it. I believe GCC does perform some rudimentary static code analysis, but it is not required to identify every instance of undefined behavior.

Using sprintf instead of snprintf

While sprintf is safe in your particular context, it is generally preferable to use snprintf to guard against buffer overflows. snprintf takes an extra parameter giving the size of the destination buffer, and (as long as that size is nonzero) it null-terminates the string for you.

jxh
  • It's worth noting that `-pedantic` only emits a warning for this. It's being converted to an error in that example only because `-Werror` was also passed, which promotes _all_ warnings to errors. If someone wants to promote only `-pedantic` warnings to errors, there's the flag `-pedantic-errors`. – Brian61354270 Feb 08 '23 at 21:30
  • @Brian Nice point. OP was already passing in `-Werror`, though. It would be redundant to add `-pedantic-errors`. – jxh Feb 08 '23 at 21:40
  • Why isn't `-Werror=overflow` included in `-Wall`? – hanshenrik Feb 10 '23 at 02:07
  • `int iii = 2147483650;` isn't an overflow as such. It's an implementation-defined conversion from one signed type to another, but otherwise valid C. The compiler is allowed to generate code that raises a signal, but it doesn't have to. The relevant part is C17 6.3.1.3 §3: "Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised." Not to be confused with adding with the `+` operator at run time and going past `INT_MAX` - that is an overflow and undefined behavior. – Lundin Feb 10 '23 at 15:36
  • @Lundin You are right. I was careful not to use "overflow" for that error in my answer, but I cannot help the warning flag name from GCC. The title was more me taking the liberty of guessing what the OP was questioning. – jxh Feb 10 '23 at 16:01

Even after jxh's excellent answer, I can see somebody still asking "but whyyy?"

Here's why:

   typedef int HANDLE;
   #define HKEY_LOCAL_MACHINE ((HANDLE)0x80000001)

No, HANDLE isn't int anymore, but it was in 1994. Everybody and their brother depended on signed overflow just working at compile time. If you changed it you broke your platform headers. That didn't happen until the big 64 bit port.

The ancient compilers simply didn't check for constants out of range. They just parsed the constant with something analogous to strtol; the overflow was really a runtime overflow inside the compiler itself, and without code written to detect it, it simply went unnoticed.

The static analysis didn't see 0x80000001; it saw -bignum. This used to bite people when cross compiling to different bitnesses; sometimes compile time constants were just wrong. One by one all this stuff got cleaned up, but there were too many places that depended on no warning on overflow (because the last thing you want is warnings in the platform headers), so it was left as is.

Joshua
  • That explanation is plausible (and I believe it), but you wouldn't happen to have some references, would you? Like notes in the C language spec, responses to proposals, scholarly articles on C history, etc. – Stephen C Feb 11 '23 at 01:49
  • @StephenC: I lived it. I read the system header files. I encountered wrong compiler output due to compiler bitness not being the same as output bitness. I know what changing the rules would do because it would fail on my old code too. – Joshua Feb 11 '23 at 02:01