
A number of compilers provide 128-bit integer types, but none of the ones I've used provides the typedef `int128_t`. Why?

As far as I recall, the standard

  • Reserves int128_t for this purpose
  • Encourages implementations that provide such a type to provide the typedef
  • Mandates that such implementations provide an intmax_t of at least 128 bits

(And I do not believe I've used an implementation that actually conforms to that last point.)

  • What platform are you on? (x86-64?) – Cameron Apr 14 '15 at 22:52
  • The standard nowhere mandates that `__int128` must be treated as an "extended integer type". – T.C. Apr 14 '15 at 22:53
  • Which language are you using? I'm pretty sure the C++ standard does not say `intmax_t` shall be at least 128 bits long, and I doubt the C standard does either. – Brian Bi Apr 14 '15 at 22:54
  • @Cameron: That's the one I'm most used to, but not the only one I've used. –  Apr 14 '15 at 22:54
  • @Brian: I recall it being implied by the existence of a 128-bit integral type, but I don't have the text handy to check the wording. But then, T.C.'s comment probably addresses that adequately. –  Apr 14 '15 at 22:55
  • @Hurkyl: right, the fact that the implementation provides a thing called `__int128` that behaves like an integer, doesn't mean that it "really is" one in the sense that `intmax_t` cares about. On the other hand, if the implementation provided `int128_t`, *then* `intmax_t` would have to be at least that big. So one possible explanation is that the implementations don't want the type `intmax_t` to change when compiler-specific extensions are disabled, but I have no idea whether that's the real reason or not. – Steve Jessop Apr 14 '15 at 22:57
  • The sample `std::numeric_limits<float>::max_exponent` is 128, but other than that `128` is not in the C++14 spec text. (It is in table numbers, page numbers, and note numbers.) – Mooing Duck Apr 14 '15 at 22:57
  • *If* the compiler provides an extended integer type, then `intmax_t` (or `uintmax_t`, as the case may be) must be able to represent it. But the compiler can also provide a type that isn't an extended integer type, yet behaves similarly to the integer types. And changing what `intmax_t` is would break ABI, so no compiler actually treats `__int128` as an extended integer type. – T.C. Apr 14 '15 at 22:59
  • @T.C.: What ABI imposes requirements on `intmax_t`? (And why?) – Keith Thompson Apr 14 '15 at 23:05
  • @KeithThompson See [Note 5 in Clang's C++11 feature table](http://clang.llvm.org/cxx_status.html). I believe commonly cited examples include library functions that accept `intmax_t` arguments (e.g., `imaxabs`), `printf/scanf`'s `%jd`, etc. – T.C. Apr 14 '15 at 23:14
  • @T.C.: That's unfortunate. In my opinion it defeats the purpose of `intmax_t`. C code shouldn't depend on it having a particular width. – Keith Thompson Apr 14 '15 at 23:19
  • @T.C.: Looking into this again, the note you cited says that "changing `intmax_t` would be an ABI-incompatible change". But at least `http://www.x86-64.org/documentation/abi.pdf` doesn't mention `intmax_t`. – Keith Thompson Apr 26 '15 at 20:43
  • @T.C.: I've posted a new question: http://stackoverflow.com/q/29927562/827263 – Keith Thompson Apr 28 '15 at 18:52
  • I think it is more insightful to think about `int256_t` after 64-bit. – i486 Oct 26 '21 at 21:11

1 Answer


I'll refer to the C standard; I think the C++ standard inherits the rules for <stdint.h> / <cstdint> from C.

I know that gcc implements 128-bit signed and unsigned integers, with the names __int128 and unsigned __int128 (__int128 is an implementation-defined keyword) on some platforms.

Even for an implementation that provides a standard 128-bit type, the standard does not require int128_t or uint128_t to be defined. Quoting section 7.20.1.1 of the N1570 draft of the C standard:

These types are optional. However, if an implementation provides integer types with widths of 8, 16, 32, or 64 bits, no padding bits, and (for the signed types) that have a two’s complement representation, it shall define the corresponding typedef names.

C permits implementations to define extended integer types whose names are implementation-defined keywords. gcc's __int128 and unsigned __int128 are very similar to extended integer types as defined by the standard -- but gcc doesn't treat them that way. Instead, it treats them as a language extension.

In particular, if __int128 and unsigned __int128 were extended integer types, then gcc would be required to define intmax_t and uintmax_t as those types (or as some types at least 128 bits wide). It does not do so; instead, intmax_t and uintmax_t are only 64 bits.

This is, in my opinion, unfortunate, but I don't believe it makes gcc non-conforming. No portable program can depend on the existence of __int128, or on any integer type wider than 64 bits. And changing intmax_t and uintmax_t would cause serious ABI compatibility problems.

answered by Keith Thompson (edited by Deduplicator)
  • To add my two cents, `__int128` is mentioned in subclause `J.5.6` (Other arithmetic types), so it may likely be treated as a compiler extension. This is convergent with [GCC's documentation](https://gcc.gnu.org/onlinedocs/gcc/Integers-implementation.html), namely: "GCC does not support any extended integer types." – Grzegorz Szpetkowski Apr 14 '15 at 23:08
  • @GrzegorzSzpetkowski: That's a rewording of a similar sentence in C90: "Other arithmetic types, such as long long int, and their appropriate conversions are defined"; it predates C99 and the introduction of extended integer types. (It's in the "Common extensions" part of the "Portability issues" appendix.) I find it odd that both the standard and gcc support extensions that add new integer types, but don't use the "extended integer type" mechanism. – Keith Thompson Apr 14 '15 at 23:15
  • I believe that the Standard Committee left the choice to the implementers. This would be convergent with other "looseness" like optional VLAs. – Grzegorz Szpetkowski Apr 14 '15 at 23:24
  • If a compiler which added 128-bit types were to expand `intmax_t`, then any code which used that type and was compiled after the type was expanded would be unable to link with code that was compiled before the type was expanded. In many contexts it's very important to be able to have new code link properly with older code that may have been compiled years ago, and for which source may not always be available. – supercat Apr 26 '16 at 18:19
  • @supercat: It's also important to follow the requirements of the standard, which says what `intmax_t` is. – Keith Thompson Apr 26 '16 at 18:27
  • @KeithThompson: To the extent that the Standard would prevent a compiler from being useful, it should be ignored. Given that one could have a 100% standards-compliant implementation which didn't even look at the source file once it came out of the preprocessor, I'd gladly take a useful non-compliant compiler over a compiler that was "100% standards-compliant" but useless, any day. – supercat Apr 26 '16 at 18:39
  • @supercat: To the extent that an implementation defines `intmax_t` as something other than "a signed integer type capable of representing any value of any signed integer type", it is both non-conforming and less useful than it would be if it defined `intmax_t` correctly. I acknowledge that linking with old code can be an issue. Surely there are ways to work around that other than breaking the clear meaning of `intmax_t`. – Keith Thompson Apr 26 '16 at 18:41
  • @KeithThompson: One might want a better workaround to exist, but that doesn't mean one does. A lot of issues could have been avoided during the 16- to 32-bit transition, and could be avoided in transitions to larger types, if there were a directive that would say that within a block of code, a built-in type should be treated as a certain size if the compiler is capable of supporting that. Data interchange between contexts that use different sizes for an integer type would need to be done via types that have the same size in both. – supercat Apr 26 '16 at 18:53
  • @supercat: I am unwilling to destroy the usefulness of `intmax_t` for the sake of backward compatibility. If you think it should mean something other than what it means, take it up with the C standard committee. – Keith Thompson Apr 26 '16 at 18:54
  • @KeithThompson: Are unsuffixed integer literals greater than 18446744073709551615 accepted? I would have some objections to a compiler in which `uintmax_t` was not large enough to accommodate all unsuffixed integer literals, but if there is no way to create something of type `__int128_t` except by coercing things to that type or using a suffix defined as an extension, I see no reason why the existence of those types should break anything. – supercat Apr 26 '16 at 19:10
  • If `__int128_t` is not treated by the compiler as an extended integer type, then `intmax_t` can legally be 64 bits. That's a workaround that conforms to the C standard. (I don't know whether huge unsuffixed integer constants are accepted; I presume you can check that as easily as I can.) I personally would prefer for `__int128_t` to be an extended integer type, and for `intmax_t` to be redefined accordingly. A command-line option that chooses between that and the current behavior would IMHO be reasonable. – Keith Thompson Apr 26 '16 at 19:12