
What prevents the C++ standard from having a 128/256-bit integer?

From other Stack Overflow questions, the recommended ways to achieve this are Boost, the compiler extension `__int128`, or `std::bitset<>`.

So programmers clearly use and need such types. Why is there reluctance to adopt them into the standard?
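
For reference, a minimal sketch of the workarounds mentioned above, assuming a GCC/Clang compiler for `__int128` and an installed Boost for the `cpp_int.hpp` header; neither is standard C++. (`std::bitset<>` provides the storage and bitwise operators but no arithmetic, so it is omitted here.)

```cpp
#include <boost/multiprecision/cpp_int.hpp>  // Boost.Multiprecision

int main() {
    // Compiler extension: supported by GCC and Clang on 64-bit targets,
    // but not part of standard C++ and absent from MSVC.
    __int128 a = static_cast<__int128>(1) << 100;

    // Library emulation: portable to any conforming compiler.
    boost::multiprecision::uint128_t b = 1;
    b <<= 100;

    (void)a; (void)b;  // silence unused-variable warnings
}
```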

Sreeraj Chundayil
  • just because something exists in boost is not a proof that it is needed... – Nidhoegger Aug 18 '21 at 19:15
  • There is definitely a need for it, but not for very many programs. And support for bigger int sizes exists, for example, clang's [`_ExtInt`](https://blog.llvm.org/2020/04/the-new-clang-extint-feature-provides.html) supports integers with bitwidth of 1 all the way to a whopping 16,777,215. Even though there is no `std::uint16777215_t`. Alas. Sad panda. – Eljay Aug 18 '21 at 19:18
  • "What prevents C++ standard from having a 128/256 bit integer?" --> Nothing prevents it other than history and lack of necessity. Perhaps in the future. – chux - Reinstate Monica Aug 18 '21 at 19:22
  • @DrewDormann: In our programs we have started encountering a 128-bit expectation from the customer side. – Sreeraj Chundayil Aug 18 '21 at 19:23
  • ***What prevents C++ standard from having a 128/256 bit integer?*** I have the same question for CPUs. Related: https://stackoverflow.com/questions/34234407/is-there-hardware-support-for-128bit-integers-in-modern-processors – drescherjm Aug 18 '21 at 19:27
  • Perhaps rather than have an updated C++ every few years with `(u)int128`, then later `(u)int256`, then later `(u)int512`, a `_BitInt(N)` / `unsigned _BitInt(N)` will arrive as planned for [C](https://en.wikipedia.org/wiki/C2x)? As I see it, what _prevents_ a 128/256-bit integer is the endless spec expansion it implies; a generic solution is needed. Good luck - hope you get a definitive answer. – chux - Reinstate Monica Aug 18 '21 at 19:33
  • There has to be a cutoff at *some* point at the language level. Eventually, you might as well just have an `APInt` in the standard library instead and let compilers specialize it for sizes where hardware support is available. –  Aug 18 '21 at 19:42
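
As a concrete illustration of the clang extension Eljay mentions above: this is non-standard, newer clang releases spell it `_BitInt` (matching the feature adopted into C23), older releases spell it `_ExtInt`, and its availability in C++ mode is a compiler-specific extension.

```cpp
// Compiles with recent clang only; not standard C++.
typedef unsigned _BitInt(128) u128;     // clang 14+ / C23 spelling
// typedef unsigned _ExtInt(128) u128;  // spelling in older clang releases

u128 twice(u128 x) { return x * 2; }
```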

1 Answer


The reason is expense and lack of need. If the standard required a 128-bit integer type, every compiler would have to implement it. On hardware that doesn't support such an integer type natively, implementations would have to provide a way of generating code to emulate it. There simply aren't enough folks who need such a type to justify imposing it on every compiler.
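
To make "emulate" concrete, a minimal sketch of what such generated code can look like: 128-bit addition built from two 64-bit halves with manual carry propagation. The `u128` struct and `add128` function are hypothetical names for this example.

```cpp
#include <cstdint>

// Hypothetical 128-bit unsigned integer made of two 64-bit limbs.
struct u128 {
    std::uint64_t lo;
    std::uint64_t hi;
};

// Addition with manual carry propagation -- roughly what a compiler
// must generate on hardware lacking a native 128-bit add.
u128 add128(u128 a, u128 b) {
    u128 r;
    r.lo = a.lo + b.lo;
    std::uint64_t carry = (r.lo < a.lo) ? 1 : 0;  // unsigned wraparound signals a carry
    r.hi = a.hi + b.hi + carry;
    return r;
}
```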

Pete Becker
  • With my very little knowledge of instruction sets: why can't we use two 64-bit registers to achieve this (if there is no 128-bit register)? What's the difficulty in it? – Sreeraj Chundayil Aug 18 '21 at 19:46
  • The de-facto "floor" of C++ in terms of type bitness is 16-bit... which reasonably speaking means that is also the de-facto floor for hardware C++ supports. Now imagine implementing a 128-bit multiply on 16-bit hardware. – Mgetz Aug 18 '21 at 19:46
  • @Mgetz: If your statement is true it makes sense :) – Sreeraj Chundayil Aug 18 '21 at 19:47
  • @InQusitive in theory you can implement C++ for the 6502... but is it worth it? Probably not. But C++ very much exists on 16-bit hardware. – Mgetz Aug 18 '21 at 19:49
  • @SergeyA debatable; the standard DOES require `int_least64_t`, so requiring an equivalent `int_least128_t` would have negative impacts, and that is what the OP's question is about. Obviously implementations are free to provide their own fixed-width types, and some do. But saying `uint64_t` is optional hides the fact that the implementation has to support 64-bit-wide math, if not wider. – Mgetz Aug 18 '21 at 19:57
  • @Mgetz The OP's question as asked, and the answer as given, do not talk about `least` types. Again, nothing prevents the standard from introducing exact-width types for 128/256 bits without adding mandatory `least` versions. – SergeyA Aug 18 '21 at 20:07
  • @SergeyA -- the various `_t` types are a completely separate issue. They are only required to exist if the compiler supports the underlying type (and that's why the original "duplicate" was not a duplicate). The question is about **requiring** support for 128-bit integer types. Compilers today are already allowed to provide 128-bit integer types, and some do. – Pete Becker Aug 18 '21 at 20:08
  • @InQusitive -- 128-bit integers are certainly **implementable** on pretty much any sane hardware. That's not the issue. The question is whether it's worthwhile to **require** every compiler implementor to put in the time and effort needed to implement and test it. Even on 8-bit processors. – Pete Becker Aug 18 '21 at 20:12
  • @PeteBecker I do not see anything in the question about "requiring". – SergeyA Aug 18 '21 at 21:02
  • @Mgetz • A 128-bit multiply on 16-bit hardware is not very hard (see the sketch after these comments). But it is very inefficient compared to native (FPU) multiplies (one or two orders of magnitude slower). A 128-bit divide on 16-bit hardware is quite a bit harder, and quite a bit more inefficient. Both are good reasons to either leave it out of the standard, or to have compilers decide to opt in to support them... which they can already do in non-standard ways anyway if they so choose. (I'm agreeing with your statement, and expanding upon it.) – Eljay Aug 18 '21 at 21:38
  • @SergeyA -- how would you translate "having a 128/256 bit integer" into standardese? Both are currently allowed. The sizes of standard integer types are imposed through two requirements: each integer type has a minimum range that must be supported, and each integer type has to be at least as large as its predecessor in the list of integer types (`char`, `short`, `int`, `long`, `long long`). What do you think the standard could say about 128/256 bit integers beyond what it currently does without imposing a requirement? – Pete Becker Aug 18 '21 at 22:38
  • @SergeyA -- my answer, talking about requirements, has been accepted. Obviously that's what the question was about. – Pete Becker Aug 18 '21 at 22:44
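
As a footnote to the multiply discussion in the comments above, a minimal sketch (with the hypothetical names `u128` and `mul64x64`) of the schoolbook technique: a full 64×64→128-bit multiply assembled from 32-bit partial products. The same decomposition, applied with narrower limbs, is how wide multiplies can be emulated on 16-bit hardware.

```cpp
#include <cstdint>

// Hypothetical 128-bit unsigned integer made of two 64-bit limbs.
struct u128 { std::uint64_t lo, hi; };

// Schoolbook multiplication: split each operand into 32-bit halves,
// form four partial products, and propagate carries between limbs.
u128 mul64x64(std::uint64_t a, std::uint64_t b) {
    std::uint64_t a_lo = a & 0xFFFFFFFFu, a_hi = a >> 32;
    std::uint64_t b_lo = b & 0xFFFFFFFFu, b_hi = b >> 32;

    std::uint64_t p0 = a_lo * b_lo;  // contributes to bits   0..63
    std::uint64_t p1 = a_lo * b_hi;  // contributes to bits  32..95
    std::uint64_t p2 = a_hi * b_lo;  // contributes to bits  32..95
    std::uint64_t p3 = a_hi * b_hi;  // contributes to bits 64..127

    // Sum the middle column; the sum cannot overflow 64 bits.
    std::uint64_t mid = (p0 >> 32) + (p1 & 0xFFFFFFFFu) + (p2 & 0xFFFFFFFFu);

    u128 r;
    r.lo = (mid << 32) | (p0 & 0xFFFFFFFFu);
    r.hi = p3 + (p1 >> 32) + (p2 >> 32) + (mid >> 32);
    return r;
}
```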