Note that those "definitions" of int8_t, intptr_t, etc., are simply aliases for built-in types. The basic data types char, int, long, double, etc., are all defined internally to the compiler - they're not defined in any header file. Their minimum ranges are specified in the language standard (a non-official, pre-publication draft is available here).
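To illustrate what "aliases for built-in types" means, a <stdint.h> on a typical platform might contain typedefs along these lines. This is just a sketch - the exact underlying types vary by implementation, and yours may look different:

/* Hypothetical excerpt - the actual mapping is implementation-specific. */
typedef signed char        int8_t;
typedef short int          int16_t;
typedef int                int32_t;
typedef long int           int64_t;     /* or long long int on some 32-bit targets */
typedef long int           intptr_t;    /* wide enough to hold an object pointer */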
The header file <limits.h> will show the ranges for different integer types for the particular implementation; here's an excerpt from the implementation I'm using:
/* Number of bits in a `char'. */
# define CHAR_BIT 8
/* Minimum and maximum values a `signed char' can hold. */
# define SCHAR_MIN (-128)
# define SCHAR_MAX 127
/* Maximum value an `unsigned char' can hold. (Minimum is 0.) */
# define UCHAR_MAX 255
/* Minimum and maximum values a `char' can hold. */
# ifdef __CHAR_UNSIGNED__
# define CHAR_MIN 0
# define CHAR_MAX UCHAR_MAX
# else
# define CHAR_MIN SCHAR_MIN
# define CHAR_MAX SCHAR_MAX
# endif
/* Minimum and maximum values a `signed short int' can hold. */
# define SHRT_MIN (-32768)
# define SHRT_MAX 32767
/* Maximum value an `unsigned short int' can hold. (Minimum is 0.) */
# define USHRT_MAX 65535
/* Minimum and maximum values a `signed int' can hold. */
# define INT_MIN (-INT_MAX - 1)
# define INT_MAX 2147483647
/* Maximum value an `unsigned int' can hold. (Minimum is 0.) */
# define UINT_MAX 4294967295U
/* Minimum and maximum values a `signed long int' can hold. */
# if __WORDSIZE == 64
# define LONG_MAX 9223372036854775807L
# else
# define LONG_MAX 2147483647L
# endif
Again, this doesn't define the types for the compiler; it's just informational. You can use these macros to guard against overflow, for example. There's a <float.h> header that does something similar for floating-point types.
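For instance, here's a minimal sketch of using INT_MAX and INT_MIN to guard an addition against signed overflow (the name safe_add is just for illustration):

#include <limits.h>
#include <stdbool.h>

/* Returns true and stores a + b in *result if the sum fits in an int;
   returns false (leaving *result untouched) if it would overflow. */
bool safe_add(int a, int b, int *result)
{
    if ((b > 0 && a > INT_MAX - b) ||   /* would exceed INT_MAX */
        (b < 0 && a < INT_MIN - b))     /* would go below INT_MIN */
        return false;
    *result = a + b;
    return true;
}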
The char type must be able to represent at least every value in the basic execution character set - upper and lower case Latin alphabet, all decimal digits, common punctuation characters, and control characters (newline, form feed, carriage return, tab, etc.). char must be at least 8 bits wide, but may be wider on some systems. There's some weirdness regarding the signedness of char - the members of the basic execution character set are guaranteed to be non-negative ([0...127]), but additional characters may have positive or negative values, so "plain" char may have the same range as either signed char or unsigned char. It depends on the implementation.
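One way to see which range "plain" char has on a given implementation is to check CHAR_MIN (a small sketch):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_MIN is 0 when plain char behaves like unsigned char,
       and SCHAR_MIN when it behaves like signed char. */
    if (CHAR_MIN == 0)
        printf("plain char is unsigned: [0...%d]\n", CHAR_MAX);
    else
        printf("plain char is signed: [%d...%d]\n", CHAR_MIN, CHAR_MAX);
    return 0;
}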
The int type must be able to represent values in at least the range [-32767...32767]. The exact range is left up to the implementation, depending on word size and signed integer representation.
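If code quietly assumes a wider int than the standard guarantees, the <limits.h> macros can at least make that assumption explicit at compile time (a sketch, not something the standard requires):

#include <limits.h>

/* The standard only guarantees int can hold [-32767...32767].
   Fail the build if this implementation doesn't provide a 32-bit int. */
#if INT_MAX < 2147483647
#error "this code assumes int is at least 32 bits wide"
#endif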
C is a product of the early 1970s, and at the time there was a lot of variety in byte and word sizes - historically, bytes could be anywhere from 7 to 9 bits wide, words could be 16 to 18 bits wide, etc. Powers of two are convenient, but not magical. Similarly, there are multiple representations for signed integers (2's complement, 1's complement, sign magnitude, etc.). So the language definition specifies the minimum requirements, and it's up to the implementor to map those onto the target platform.