As stated in the title, I have come across something very strange regarding explicit vs. implicit casting with GCC on Linux.
I have the following simple code to demonstrate the problem:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int i;
    uint32_t e1 = 0;
    uint32_t e2 = 0;
    const float p = 27.7777;

    printf("# e1 (unsigned) e1 (signed) e2 (unsigned) e2 (signed)\n");
    for (i = 0; i < 10; i++) {
        printf("%d %13u %11d %13u %11d\n", i, e1, e1, e2, e2);
        e1 -= (int)p; /* explicit cast to int */
        e2 -= p;      /* implicit conversion */
    }
    return 0;
}
As you can see, e1 is decremented by p explicitly cast to an int, while e2 is decremented by p with only an implicit conversion.
I expected e1 and e2 to contain the same values, but they do not... and in fact, it looks like the result is system dependent.
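If I understand the semantics of compound assignment correctly, the two statements should expand roughly as shown below; the extra casts are mine, added only to spell out what I think the compiler does, so this may well be where my understanding goes wrong:

e1 = e1 - (uint32_t)(int)p;     /* p truncated to 27, then converted to
                                   unsigned, so e1 wraps modulo 2^32 */
e2 = (uint32_t)((float)e2 - p); /* e2 converted to float, subtraction done
                                   in float, result converted back */

Based on this I assumed the fractional part of p would simply end up being dropped in both cases.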
For testing the code, I have two virtual machines (VirtualBox, started with Vagrant). Here is the first machine, a 32-bit Ubuntu:
vagrant@vagrant:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial
vagrant@vagrant:~$ uname -a
Linux vagrant 4.4.0-92-generic #115-Ubuntu SMP Thu Aug 10 16:02:55 UTC 2017 i686 i686 i686 GNU/Linux
vagrant@vagrant:~$ gcc --version
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
To build and execute, I use the following:
vagrant@vagrant:~$ gcc -Wall /vagrant/test.c
vagrant@vagrant:~$ ./a.out
# e1 (unsigned) e1 (signed) e2 (unsigned) e2 (signed)
0             0           0             0           0
1    4294967269         -27    4294967269         -27
2    4294967242         -54    4294967268         -28
3    4294967215         -81    4294967268         -28
4    4294967188        -108    4294967268         -28
5    4294967161        -135    4294967268         -28
6    4294967134        -162    4294967268         -28
7    4294967107        -189    4294967268         -28
8    4294967080        -216    4294967268         -28
9    4294967053        -243    4294967268         -28
vagrant@vagrant:~$
As you can see, everything looks fine for e1, the one using the explicit cast, but for e2 the result is quite strange: after the first iteration it gets stuck at 4294967268.
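If my understanding of unsigned wraparound is right, the e1 column is just modulo-2^32 arithmetic: 0 - 27 wraps to 2^32 - 27 = 4294967269, the next step gives 4294967269 - 27 = 4294967242, and so on, decreasing by exactly 27 each iteration.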
Then I try the same on another virtual machine, this time a 64-bit version of Ubuntu:
vagrant@ubuntu-xenial:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial
vagrant@ubuntu-xenial:~$ uname -a
Linux ubuntu-xenial 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
vagrant@ubuntu-xenial:~$ gcc --version
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.5) 5.4.0 20160609
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
And here is the output of the application:
vagrant@ubuntu-xenial:~$ gcc -Wall /vagrant/test.c
vagrant@ubuntu-xenial:~$ ./a.out
# e1 (unsigned) e1 (signed) e2 (unsigned) e2 (signed)
0             0           0             0           0
1    4294967269         -27    4294967269         -27
2    4294967242         -54             0           0
3    4294967215         -81    4294967269         -27
4    4294967188        -108             0           0
5    4294967161        -135    4294967269         -27
6    4294967134        -162             0           0
7    4294967107        -189    4294967269         -27
8    4294967080        -216             0           0
9    4294967053        -243    4294967269         -27
The values of e2 are still not what I expected, and now they even differ from what I got on the 32-bit system: they alternate between 4294967269 and 0. I have no idea whether this difference is caused by 32-bit vs. 64-bit, or whether it is related to something else.
However, I would like to understand why there is a difference between e1 and e2, and whether it is possible to at least get a warning from GCC when this happens (as shown above, -Wall does not report anything).
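I am guessing there might be an additional flag for this kind of implicit conversion, perhaps something along the lines of

gcc -Wall -Wconversion /vagrant/test.c

but I don't know whether that is the right flag, or whether it would catch this case at all.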
Thanks :-)