5

I can compile and run a program that assigns a long int literal, albeit one that would fit into an int, to an int variable.

$ cat assign-long-to-int.c
#include <stdio.h>

int main(void){
  int i = 1234L;        //assign long to an int
  printf("i: %d\n", i);
  return 0;
}
$ gcc assign-long-to-int.c -o assign-long-to-int
$ ./assign-long-to-int 
i: 1234

I know that 1234 would fit into an int but would still expect to be able to enable a warning. I've been through all the gcc options but can't find anything suitable.

Is it possible to generate a warning for this situation? From the discussion here, and the gcc options, the short answer is no. It isn't possible.

Would there be any point in such a warning? It's obvious in the trivial example I posted that 1234L is being assigned to an int variable, and that it will fit. However, what if the declaration and the assignment were separated by many lines of code? The programmer writing 1234L is signaling that they expect this literal integer to be assigned to a long. Otherwise, what's the point of appending the L?

In some situations, appending the L does make a difference. For example

$ cat sizeof-test.c 
#include <stdio.h>
int main(void){
  printf("%zu\n", sizeof(1234));
  printf("%zu\n", sizeof(1234L));
  return 0;
}
$ gcc sizeof-test.c -o sizeof-test
$ ./sizeof-test 
4
8

Although the compiler can see that 1234L would fit into a 4-byte int, the L suffix makes it an 8-byte long.

$ gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/9/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none:hsa
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu 9.3.0-17ubuntu1~20.04' --with-bugurl=file:///usr/share/doc/gcc-9/README.Bugs --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --prefix=/usr --with-gcc-major-version-only --program-suffix=-9 --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-plugin --enable-default-pie --with-system-zlib --with-target-system-zlib=auto --enable-objc-gc=auto --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --without-cuda-driver --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 9.3.0 (Ubuntu 9.3.0-17ubuntu1~20.04)
twisted
  • Compiler sees that it fits. If you increase the number you will get the expected warning: https://godbolt.org/z/PEscnjdvG – mch Dec 01 '21 at 14:27
  • Don't know about gcc but MS VC issues a warning when it won't fit, but not when it does. Perhaps gcc has a higher warning level available like `-Wall` – Weather Vane Dec 01 '21 at 14:28
  • @WeatherVane: `-Wall` and even `-Wextra` do not cause a warning to be emitted for this code. – Fred Larson Dec 01 '21 at 14:30
  • Similarly MSVC issues a warning for `float f = 1.2;` but not for `float f = 1.25;` or for `float f = 1.2f;`. – Weather Vane Dec 01 '21 at 14:33
  • Short answer is no, it isn't possible. Hard to pick any one response here as being 'the right answer', so I've instead marked all as useful. – twisted Dec 03 '21 at 14:51

3 Answers

5

Compilers should check the value range, not the type of the integer constant. Otherwise we would end up with a lot of whining whenever we initialize a small integer type, since there are no small integer constants smaller than int.

For example, `short i = 32768;` yields a warning with clang's `-Wconstant-conversion` but not with gcc. There's also `-Wconversion`, but it's prone to false positives on either compiler.

If you want to guard against implicit conversions between various integer types, you should probably use a static analyser instead.

Lundin
4

In the case of constants, the compiler can see that the value in question fits into the type being assigned to, so there's really no point in warning. If the constant were out of range, e.g. 5000000000L, then the compiler would see that and generate a warning.

What the compiler can do, however, is warn when a value that is not a compile-time constant is assigned to a narrower type:

long y = 1;
int x = y;

If you add the -Wconversion flag (not included in either -Wall or -Wextra), you'll get this warning:

x1.c:6:5: warning: conversion to ‘int’ from ‘long int’ may alter its value [-Wconversion]
     int x = y;
dbush
1

The compiler will implicitly convert between most primitive integer types. When you convert from a wider type to a narrower one, the extra high-order bits are discarded: for unsigned target types the result is well defined (the value is reduced modulo 2^N), while for signed target types the result is implementation-defined.

For example, the following code will print "0xef":

#include <stdio.h>
#include <stdint.h>
int main() {
  uint32_t x = 0xdeadbeef;
  uint8_t y = x;
  printf("0x%x\n", y);
  return 0;
}

To address your question specifically, I don't think there is a warning for this behavior, because this conversion is technically a defined feature of the C language.

squidwardsface
  • It's more of a language bug than a feature. It is probably not very obvious to anyone why `short i = 32768;` should result in `i` getting the value `-32768`. Or why `int32_t i = 0x7eadbeef;` has well-defined behavior but `int32_t i = 0xdeadbeef;` results in `i` turning negative. This is one of the most broken parts of the C language. – Lundin Dec 01 '21 at 15:02
  • @Lundin These examples are not a shortfall of C... It may not be 'beginner-friendly' to expect the programmer to understand concepts like 2's complement representations or arithmetic/logical shifting, but this behavior is perfectly in line with C's "trust the programmer" philosophy. That being said, I do think this is a reasonable feature to expect from GCC, namely catching sign-flipping when assigning from a literal to an integer. – squidwardsface Dec 01 '21 at 17:41
  • You say in this answer that larger numbers should get truncated. And now you say that they should not be truncated but overflow as per 2's complement - a signedness format which is not guaranteed by the standard. Similarly to arithmetic/logical shifting, not sure why you brought it up, but it is not guaranteed by the standard either. The C standard does however guarantee that `0x7eadbeef` is of type `signed int` but `0xdeadbeef` of `unsigned int`. Makes perfect sense right? --> – Lundin Dec 01 '21 at 19:39
  • So what exactly should C trust the programmer to do? Trust the programmer to make mistakes because they thought the C type system was consistent and rational? Trust the programmer to write unspecified non-portable code not covered by the C standard? No, the way C deals with integer types overall is completely broken, no mistake about it. – Lundin Dec 01 '21 at 19:39
  • Speaking of poorly defined broken features, printing an `uint8_t` with `%x` is undefined behavior too. It gets implicit converted by the default argument promotions to `int` (how is that making sense btw?) but `%x` expects an `unsigned int`. What your code here will do is not at all defined by the C language. – Lundin Dec 01 '21 at 19:43
  • Thankfully, Dennis Ritchie and Ken Thompson recognized these potential ambiguities in integer math, and blessed us with the ability to explicitly cast integer types. – squidwardsface Dec 02 '21 at 02:46
  • No they certainly didn't. Ancient pre-standard C only had type `int`, which is largely the reason why standardized C with more types turned out flawed. It is in fact the most likely reason why the flawed integer promotion rules are there, or why no small integer constants exist. – Lundin Dec 02 '21 at 07:03