76

I want a 128-bit integer because I want to store the result of multiplying two 64-bit numbers. Is there any such thing in gcc 4.4 and above?

asked by MetallicPriest, edited by Peter Cordes
  • Take a look: http://stackoverflow.com/questions/3329541/does-gcc-support-128-bit-int-on-amd64 – Valeri Atamaniouk Apr 18 '13 at 16:28
  • @chux: Why did you reopen this? The top answer here is wrong, claiming that `uint128_t` is defined when in fact gcc provides `unsigned __int128` or `__uint128_t`. And currently only on 64-bit targets where 128-bit only takes 2 integer registers. – Peter Cordes Feb 21 '19 at 18:39
  • @PeterCordes I VTO'd as the 2 dupes listed did not answer the question. My VTO was not related to any answers. – chux - Reinstate Monica Feb 22 '19 at 05:25
  • @chux: ok that's fair, but wasn't it closed as a dup of [Does gcc support 128-bit int on amd64?](//stackoverflow.com/q/3329541)? That looks like a duplicate to me. – Peter Cordes Feb 22 '19 at 05:28
  • @PeterCordes This question was closed due to 2 dupes: that [question](https://stackoverflow.com/q/3329541) was narrower and so not a dupe of this question - and another. That [answer](https://stackoverflow.com/a/3329615/2410359) addresses gcc 4.6 and before in general, but not this question's scope of 4.4 onward. Certainly these and many other related questions are similar and on the border of being sufficiently similar/different. – chux - Reinstate Monica Feb 22 '19 at 05:38
  • @chux: This question says "or above", and gcc4.6 is pretty old at this point. (Although admittedly I have seen answers this year with asm output from gcc4.4 on RHEL). Anyway, https://stackoverflow.com/posts/16088282/timeline doesn't show the other (not?) duplicate that this was closed as. The comments were auto-deleted when the dup close went through, and the close event itself doesn't seem to have recorded the duplicate list. – Peter Cordes Feb 22 '19 at 05:46

3 Answers

60

For GCC before C23, a primitive 128-bit integer type is only ever available on 64-bit targets, so you need to check for availability even if you have already detected a recent GCC version. In theory gcc could support TImode integers on machines where it would take 4x 32-bit registers to hold one, but I don't think there are any cases where it does.

In C++, consider a library such as boost::multiprecision::int128_t which hopefully uses compiler built-in wide types if available, for zero overhead vs. using your own typedef (like GCC's __int128 or Clang's _BitInt(128)). See also @phuclv's answer on another question.

ISO C23 will let you typedef unsigned _BitInt(128) u128, modeled on clang's feature, originally called _ExtInt(), which works even on 32-bit machines; see a brief intro to it. Current GCC -std=gnu2x doesn't even support that syntax yet.
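As a rough sketch of what that looks like (assuming a compiler with _BitInt support, e.g. recent clang; the typedef name u128 here is my own):

// Needs a compiler with C23 _BitInt support (recent clang; older clang
// spelled it _ExtInt).  Unlike __int128, this works on 32-bit targets too.
typedef unsigned _BitInt(128) u128;

u128 mul64_c23(unsigned long long a, unsigned long long b) {
    return (u128)a * b;   // full 64x64 => 128-bit product
}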


GCC 4.6 and later defines __int128 / unsigned __int128 as a built-in type. Use #ifdef __SIZEOF_INT128__ to detect it.

GCC 4.1 and later define __int128_t and __uint128_t as built-in types. (You don't need #include <stdint.h> for these, either. Proof on Godbolt.)
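For illustration, both spellings side by side (on a 64-bit target; no header needed for either):

__uint128_t u_old;            // gcc 4.1+: double-underscore typedef-style names
__int128_t  s_old;

unsigned __int128 u_new;      // gcc 4.6+: keyword-style built-in type
__int128          s_new;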

I tested on the Godbolt compiler explorer for the first versions of compilers to support each of these 3 things (on x86-64). Godbolt only goes back to gcc4.1, ICC13, and clang3.0, so I've used <= 4.1 to indicate that the actual first support might have been even earlier.

        legacy             |  recommended(?)        |  One way of detecting support
        __uint128_t        |  [unsigned] __int128   |  #ifdef __SIZEOF_INT128__
gcc     <= 4.1             |  4.6                   |  4.6
clang   <= 3.0             |  3.1                   |  3.3
ICC     <= 13              |  <= 13                 |  16  (Godbolt doesn't have 14 or 15)

If you compile for a 32-bit architecture like ARM, or x86 with -m32, no 128-bit integer type is supported even with the newest version of any of these compilers. So you need to detect support before using it, if it's possible for your code to work at all without it.

The only direct CPP macro I'm aware of for detecting it is __SIZEOF_INT128__, but unfortunately some old compiler versions support the type without defining the macro. (And there's no macro for __uint128_t, only for the gcc4.6-style unsigned __int128.) See also: How to know if __uint128_t is defined

Some people still use ancient compiler versions like gcc4.4 on RHEL (RedHat Enterprise Linux), or similar crusty old systems. If you care about obsolete gcc versions like that, you probably want to stick to __uint128_t. And maybe detect 64-bitness in terms of sizeof(void*) == 8 as a fallback for __SIZEOF_INT128__ not being defined. (I think GNU systems always have CHAR_BIT==8, although I might be wrong about some DSPs.) That will give a false negative on ILP32 ABIs on 64-bit ISAs (like x86-64 Linux x32, or AArch64 ILP32), but this is already just a fallback / bonus for people using old compilers that don't define __SIZEOF_INT128__.
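A minimal sketch of that detection logic for GNU-compatible compilers (UINTPTR_MAX from <stdint.h> stands in for the sizeof(void*) == 8 check, since sizeof can't be evaluated in #if; the name u128 is mine):

#include <stdint.h>

#if defined(__SIZEOF_INT128__)
typedef unsigned __int128 u128;    /* gcc4.6+ / clang3.3+ / ICC16+ */
#elif defined(__GNUC__) && UINTPTR_MAX == 0xFFFFFFFFFFFFFFFFu
typedef __uint128_t u128;          /* old 64-bit gcc, e.g. 4.4 on RHEL */
#else
#error "no 128-bit integer type available on this target"
#endif

As noted above, the pointer-width fallback still gives a false negative on ILP32 ABIs like x32 or AArch64 ILP32.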

There might be some 64-bit ISAs where gcc doesn't define __int128, or maybe even some 32-bit ISAs where gcc does define __int128, but I'm not aware of any.


In GCC internals, this is integer TImode (see the GCC internals manual). (Tetra-integer = 4x the width of a 32-bit int, vs. DImode = double width, vs. SImode = plain int.) As the GCC manual points out, __int128 is supported on targets that support a 128-bit integer mode (TImode).

// __uint128_t is pre-defined equivalently to this
typedef unsigned uint128 __attribute__ ((mode (TI)));

There is an OImode in the manual, oct-int = 32 bytes, but current GCC for x86-64 complains "unable to emulate 'OI'" if you attempt to use it.
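For reference, the attempt that triggers that error would look like this (the name uint256 is mine):

// current x86-64 GCC rejects this: error: unable to emulate 'OI'
typedef unsigned uint256 __attribute__ ((mode (OI)));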


Random fact: ICC19 and g++/clang++ -E -dM define:

#define __GLIBCXX_TYPE_INT_N_0 __int128
#define __GLIBCXX_BITSIZE_INT_N_0 128

@MarcGlisse commented that's the way you tell libstdc++ to handle extra integer types (overload abs, specialize type traits, etc.).

icpc defines that even with -xc (to compile as C, not C++), while g++ -xc and clang++ -xc don't. But compiling with actual icc (e.g. select C instead of C++ in the Godbolt language dropdown) doesn't define this macro.


The test function was:

#include <stdint.h>   // for uint64_t

#define uint128_t __uint128_t
//#define uint128_t unsigned __int128

uint128_t mul64(uint64_t a, uint64_t b) {
    return (uint128_t)a * b;
}

Compilers that support it all compile it efficiently, to:

    mov       rax, rdi
    mul       rsi
    ret                  # return in RDX:RAX which mul uses implicitly
answered by Peter Cordes
  • `unsigned __int128` - full division is the *worst*. Frequently (especially with div / mod arithmetic routines) we know the quotient will fit in a 64-bit result, but the C run-time can't assume it. I wrote a reference 'modexp' (64-bit base, exponent, modulus), using `__int128` ... vs. a version using 64-bit intrinsics, [reciprocal division](https://gmplib.org/~tege/division-paper.pdf), etc., for an *18x* speed-up! 3x or 4x are respectable, but remember that there's always call overhead, and the [u]int128 functions can't make the algorithmic assertions that we can! – Brett Hale Jul 01 '19 at 17:12
  • @BrettHale: interesting. gcc's helper function maybe only checks that the upper half is zero instead of (for unsigned) checking that `divisor > high_half_dividend`. – Peter Cordes Jul 01 '19 at 20:51
  • Fast types aren't a good way to find out about an architecture's bitness. E.g., musl's `{,u}int_fast{16,32}_t`s on x86_64 are 32 bits, glibc's are 64 (also not good to include in an API for that matter). – Petr Skocik Jan 19 '20 at 08:33
  • @PSkocik: IDK why I even suggested that in the first place. I think I had been hoping to find something that would even work on ILP32 ABIs like x86-64 Linux's x32, or AArch64 ILP32, but that is not the case. Glad to hear MUSL makes it 32-bit on x86-64; that makes more sense to me. I hadn't realized it wasn't nailed down by the ABI and therefore not suitable for use in an API. – Peter Cordes Jan 19 '20 at 09:26
  • @PeterCordes It caught me by surprise too. I also think 32 bits is probably a better choice. IIRC from the last time I benchmarked this, 32 bit ops gave me pretty much the same numbers (better by a tiny amount) as 64 bit ops. – Petr Skocik Jan 19 '20 at 09:54
  • @PSkocik: 64-bit integers can sometimes save an instruction for sign-extension when used as array indices, but otherwise are strictly worse. Large code-size (REX prefixes), and much slower `div` on Intel CPUs (~2.5x). On AMD before Zen, 64-bit `mul`/`imul` is slower than 32-bit. Also 64-bit `popcnt` is slower on some CPUs. (All of these are compared to 32-bit, the default operand-size in x86-64 machine code, which zero-extends to 64-bit for free.) – Peter Cordes Jan 19 '20 at 10:23
  • About __GLIBCXX_TYPE_INT_N_0, it isn't just ICC that defines it, that's the way you tell libstdc++ to handle extra integer types (overload abs, specialize type traits, etc). – Marc Glisse Jan 25 '20 at 21:19
  • @MarcGlisse: Turns out passing `-xc` to ICC's C++ front-end doesn't make it preprocess as C or something. Thanks, updated. – Peter Cordes Jan 25 '20 at 21:36
34

Ah, big integers are not C's forte.

GCC does have an unsigned __int128/__int128 type, starting from version 4.something (not sure here). I do seem to recall, however, that there was a __int128_t def before that.

These are only available on 64-bit targets.

(Editor's note: this answer used to claim that gcc defined uint128_t and int128_t. None of the versions I tested on the Godbolt compiler explorer (gcc4.1 through 8.2, clang, or ICC) define those types without leading __.)

answered by salezica, edited by Peter Cordes
  • `long long int` is 64 bits in every implementation I've used, including GCC for x86-64. And I believe that GCC's 128-bit int is only available on 64-bit platforms. – interjay Apr 18 '13 at 16:34
  • I just tried it on 2 systems, and they back up your results. I've removed the assertion of it going up to 128 bits long. – salezica Apr 18 '13 at 16:52
  • gcc 4.7.2 on Linux x86_64 doesn't have `[u]int128_t`. I suppose it's possible gcc 4.8.0 might have it. – Keith Thompson Apr 18 '13 at 17:51
  • Try `typedef int really_long __attribute__ ((mode (TI)));`. It has worked for a long time (on architectures with native 64-bit). – Pascal Cuoq Apr 18 '13 at 20:39
  • `gcc-4.1 -m64` and above support `__uint128_t` out-of-the-box, and they also support the following typedef: `typedef unsigned uint128_t __attribute__ ((mode (TI)));`. – pts Jan 01 '14 at 21:54
  • GCC had 128-bit int since at least 4.1.2 [Is there any way to do 128 bit ints on gcc <4.4](https://stackoverflow.com/q/5576217/995714#comment90833163_5576526) – phuclv Feb 21 '19 at 15:43
  • @Peter Cordes Your edit nicely improved this answer. – chux - Reinstate Monica Feb 22 '19 at 05:26
  • Apologies for going somewhat off-topic. Is there some way that we could optionally make use of 128-bit vector registers in e.g. x86, AMD64? I realise that arithmetic could either be awkward or would require temporary copying to/from ordinary integer registers. But the advantages would be getting loads/stores to/from memory to use the native 128-bit instructions, use the full memory bandwidth, and also making use of the vector registers reduces general register pressure, improving unrelated code that would otherwise need spills. Compiler backend needs to evaluate potential gains vs losses. – Cecil Ward Apr 05 '23 at 01:55
  • I should perhaps have placed my question elsewhere, somewhere more suitable, not sure where. – Cecil Ward Apr 05 '23 at 01:56
17

You could use a library which handles arbitrary or large precision values, such as the GNU MP Bignum Library.
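A minimal sketch of the questioner's 64x64 multiply using GMP (assumes libgmp is installed; link with -lgmp; all variable names are mine):

#include <gmp.h>
#include <inttypes.h>

int main(void) {
    uint64_t a = UINT64_MAX, b = UINT64_MAX;   // worst-case 64x64 product

    mpz_t x, y, r;
    mpz_inits(x, y, r, NULL);
    mpz_set_ui(x, a);     // note: mpz_set_ui takes unsigned long, which is
    mpz_set_ui(y, b);     // 64-bit on typical LP64 targets (not Win64)
    mpz_mul(r, x, y);     // exact product, cannot overflow

    gmp_printf("%Zx\n", r);   // prints fffffffffffffffe0000000000000001
    mpz_clears(x, y, r, NULL);
    return 0;
}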

answered by Reed Copsey
  • Reed gave a perfectly valid answer for non-64-bit machines. This question is the number one response to a search for "C int128", so having a general answer for any platform is a good thing. Maybe next time if you feel so strongly about this subject, write an answer instead of tearing someone else down (that way you can reap the rep benefits as well). – CrazyCasta Oct 29 '20 at 05:35
  • @CrazyCasta Having an arbitrary-precision library just for a 128-bit type is wasteful, and the overhead is too big. A fixed-width library like [Boost.Multiprecision or calccrypto/uint128_t](https://stackoverflow.com/a/28117872/995714) will be much smaller and faster. – phuclv Jan 12 '21 at 09:24