The C99 standard differentiates between implicit and explicit type conversions (6.3 Conversions). I guess, but could not find it stated, that implicit conversions are performed when the target type has greater precision than the source and can represent its value. [That is what I consider to happen from int to double.] Given that, I looked at the following example:
#include <stdio.h>  // printf
#include <limits.h> // for INT_MIN
#include <stdint.h> // for endianness
#define IS_BIG_ENDIAN (*(uint16_t *)"\0\xff" < 0x100)

int main()
{
  printf("sizeof(int): %lu\n", sizeof(int));
  printf("sizeof(float): %lu\n", sizeof(float));
  printf("sizeof(double): %lu\n", sizeof(double));
  printf( IS_BIG_ENDIAN == 1 ? "Big" : "Little" ); printf( " Endian\n" );

  int a = INT_MIN;
  printf("INT_MIN: %i\n", a);
  printf("INT_MIN as double (or float?): %e\n", a);
}
I was very surprised to find the following output:
sizeof(int): 4
sizeof(float): 4
sizeof(double): 8
Little Endian
INT_MIN: -2147483648
INT_MIN as double (or float?): 6.916919e-323
So the value printed is a subnormal floating-point number, close to the minimal positive subnormal double 4.9406564584124654 × 10^−324. Strange things happen when I comment out the two printf calls for the endianness check: I get a different value for the double:
#include <stdio.h>  // printf
#include <limits.h> // for INT_MIN
#include <stdint.h> // for endianness
#define IS_BIG_ENDIAN (*(uint16_t *)"\0\xff" < 0x100)

int main()
{
  printf("sizeof(int): %lu\n", sizeof(int));
  printf("sizeof(float): %lu\n", sizeof(float));
  printf("sizeof(double): %lu\n", sizeof(double));
  // printf( IS_BIG_ENDIAN == 1 ? "Big" : "Little" ); printf( " Endian\n" );

  int a = INT_MIN;
  printf("INT_MIN: %i\n", a);
  printf("INT_MIN as double (or float?): %e\n", a);
}
output:
sizeof(int): 4
sizeof(float): 4
sizeof(double): 8
INT_MIN: -2147483648
INT_MIN as double (or float?): 4.940656e-324
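Out of curiosity I also checked which double bit patterns those printed values correspond to. This is just a quick sketch of my check (assuming the decimal literals round-trip to the same subnormals), not part of the program above:

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
  double d1 = 6.916919e-323;           /* value from the first run  */
  double d2 = 4.940656e-324;           /* value from the second run */
  uint64_t bits;

  memcpy(&bits, &d1, sizeof bits);     /* copy the raw object representation */
  printf("0x%016" PRIx64 "\n", bits);  /* prints 0x000000000000000e here */

  memcpy(&bits, &d2, sizeof bits);
  printf("0x%016" PRIx64 "\n", bits);  /* prints 0x0000000000000001 here */
}

So the second run apparently printed the very smallest positive subnormal double, whose bit pattern has only the lowest bit set, and the first run one with bit pattern 14, which does not look like INT_MIN at all.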
- gcc --version: (Ubuntu 4.8.2-19ubuntu1) 4.8.2
- uname: x86_64 GNU/Linux
- compiler options were: gcc -o x x.c -Wall -Wextra -std=c99 --pedantic
- And yes, there was one warning:
x.c: In function ‘main’:
x.c:15:3: warning: format ‘%e’ expects argument of type ‘double’, but argument 2
has type ‘int’ [-Wformat=]
printf("INT_MIN as double (or float?): %e\n", a);
^
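If I hand printf an actual double, as the warning suggests it expects, I do get the value I expected from an implicit int-to-double conversion. A minimal sketch of that comparison (my own separate test, not the program above):

#include <stdio.h>
#include <limits.h>

int main(void)
{
  int a = INT_MIN;
  double d = a;               /* implicit conversion int -> double */
  printf("%e\n", d);          /* prints -2.147484e+09 here */
  printf("%e\n", (double)a);  /* explicit cast, same output */
}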
But I still cannot understand what exactly is happening.
- In little endianness I picture INT_MIN as 00...0001 and the minimal subnormal double as 100..00#, starting with the mantissa, followed by the exponent, and ending with the # as the sign bit.
- Is this way of applying the "%e" format specifier to an int an implicit cast? A reinterpret cast?
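To make clearer what I mean by "reinterpret": I also tried literally copying the four bytes of the int into a zeroed double, which is my naive idea of a bit reinterpretation on a little-endian machine (a rough sketch, assuming memcpy-based type punning is fine here):

#include <stdio.h>
#include <string.h>
#include <limits.h>

int main(void)
{
  int a = INT_MIN;           /* bytes in memory (little endian): 00 00 00 80 */
  double d = 0.0;            /* all-zero bytes to start from */
  memcpy(&d, &a, sizeof a);  /* overwrite the low-order half of the double */
  printf("%e\n", d);         /* prints roughly 1.06e-314 here, i.e. 2^31 * 2^-1074 */
}

That is a subnormal too, but neither 6.916919e-323 nor 4.940656e-324, so it does not look like a plain reinterpretation of the int's bits either.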
I am lost, please enlighten me.