My question is pretty simple: since std::intmax_t is defined as the maximum-width integer type according to cppreference, why does it not correspond to __int128_t in GCC?

-
As that's implementation-defined you'd have to ask the GCC developer list, but I would imagine it's because the platform does not support 128-bit ints natively. – Mgetz Jan 21 '14 at 17:49
-
I would guess it's because __int128_t is not a standard type (it's not listed at your link anyway). – Elliott Frisch Jan 21 '14 at 17:51
-
language_lawyer, anyone? – Ankur S Jul 12 '18 at 20:15
3 Answers
I believe this is a violation of the C and C++ standards -- either that, or gcc doesn't consider __int128_t to be an integer type.

The C standard (both the 1999 and 2011 editions) doesn't require intmax_t to be one of the standard types; it's required to be "a signed integer type capable of representing any value of any signed integer type". In particular, it can be an extended integer type -- and if there is a 128-bit extended integer type, then intmax_t must be at least 128 bits wide.

The C standard even suggests using implementation-defined keywords that "have the form of an identifier reserved for any use" as the names of extended integer types -- such as __int128_t.

The 2011 C++ standard adopts C99's extended integer types feature, and defers to the 1999 C standard for the definition of intmax_t and <stdint.h>.

So if __int128_t is an integer type within the meaning defined by the standard (which it certainly can be), and is, as the name implies, 128 bits wide, then intmax_t must be at least 128 bits wide.

As Stephen Canon's answer points out, changing intmax_t does require some work. But the C and C++ standards do not recognize that as a justification for defining intmax_t incorrectly.

Of course all of this applies equally to uintmax_t.
#include <stdio.h>
#include <stdint.h>

int main(void) {
    __uint128_t huge = UINTMAX_MAX; /* the largest value of uintmax_t */
    huge++;
    if (huge > UINTMAX_MAX) {       /* true only if __uint128_t is wider */
        puts("This should not happen");
    }
}
On my system (Linux x86_64, gcc 4.7.2), the above program prints:
This should not happen
If gcc conforms to the standard, then that should be possible only if __int128_t is not an integer type -- but quoting the gcc 4.8.2 manual (emphasis added):

As an extension the *integer scalar type* __int128 is supported for targets which have an integer mode wide enough to hold 128 bits. Simply write __int128 for a signed 128-bit integer, or unsigned __int128 for an unsigned 128-bit integer. There is no support in GCC for expressing an integer constant of type __int128 for targets with long long integer less than 128 bits wide.
I suppose one could argue that the "as an extension" phrase lets gcc off the hook here, justifying the existence of __int128_t under section 4 paragraph 6 of the standard:

A conforming implementation may have extensions (including additional library functions), provided they do not alter the behavior of any strictly conforming program.

rather than under section 6.2.6 paragraph 4:

There may also be implementation-defined extended signed integer types.

(I personally think that making intmax_t at least as wide as __int128_t, if it exists, would be more in keeping with the intent of the standard, even if it's (barely) possible to argue that it doesn't violate the letter of the standard.)


-
An especially determined language lawyer might argue that the notion of "extended integer type" is never formally defined by the standard; so long as GCC doesn't say that `__int128_t` is such a type, and so long as they don't interpret literals as having that type, there is no standard violation. A more reasonable pragmatist might argue that a compiler should warn that you are using an extension, but support the type anyway as it is genuinely useful. – Stephen Canon Jan 21 '14 at 20:21
-
@StephenCanon: Thus the caveat: "either that, or gcc doesn't consider `__int128_t` to be an integer type". Still, I think that making `intmax_t` a typedef for `__int128_t` would be more in the spirit (as well as the letter) of the standard than making `intmax_t` smaller than `__int128_t`. – Keith Thompson Jan 21 '14 at 20:38
-
Agreed. However, that turns out to require support from a decent number of components outside of a compiler, which are extremely resistant to any API breaking change. While not impossible it is quite unlikely to happen anytime soon; supporting `__int128_t` as a not-quite-conformant extension is a reasonable compromise. (I would argue that it should produce an error under `-std=c11`, and a compatibility warning under `-std=gnu11` or equivalent). – Stephen Canon Jan 21 '14 at 20:41
-
@StephenCanon: I've updated my answer; see the last few paragraphs. – Keith Thompson Jan 21 '14 at 20:47
-
pity but [GCC does not support any extended integer types.](https://gcc.gnu.org/onlinedocs/gcc/Integers-implementation.html) – jfs Oct 02 '14 at 10:19
Changing intmax_t requires not only changes to a compiler but also to numerous standard library functions that need to accept intmax_t arguments (and platform ABIs may define intmax_t as well). A compiler can unilaterally provide __int128_t as an extension, but it cannot unilaterally change the type intmax_t. That requires support from all of the standard library implementations that the compiler targets.

-
And the psABI for the architecture, which is not going to change in such an incompatible way for a rarely-useful purpose. – R.. GitHub STOP HELPING ICE Jan 21 '14 at 17:56
-
As an addition, also the preprocessing phase has to take that into account. It is supposed to decode constants of that type and to have all its expressions evaluated in `[u]intmax_t`. – Jens Gustedt Jan 21 '14 at 17:59
__int128 is not sufficiently functional to be used as an intmax_t.

The <stdint.h> header must provide a macro, INTMAX_C(9999999999999999999999), which lets you write a constant in the C source for any value up to the maximum of the type.

The GCC documentation says "There is no support in GCC for expressing an integer constant of type __int128 for targets with long long integer less than 128 bits wide."

Therefore, it cannot be used for intmax_t.

-
The Standard requires these macros to be constant expressions, not literals. So `#define INTMAX_MAX ((((intmax_t)LONGLONG_MAX) << 64) | ULONGLONG_MAX)` seems acceptable. – Ben Voigt Jun 20 '15 at 04:46
-
The constant expressions must be suitable for `#if` expressions so no casts are allowed. Also this technique would not help with `INTMAX_C` which needs to be able to cope with any integer (within range) the programmer might wish to use. – user3710044 Jun 20 '15 at 04:57