22

Which is the biggest integer datatype in C++?

Brent Bradburn
  • 51,587
  • 17
  • 154
  • 173

9 Answers

23

The biggest standard C++ integer type is long.

C has a long long, and C++0x is going to add it as well; of course, you could also implement your own custom integer type, perhaps even a BigInt class.

But technically speaking, considering the built-in integer types, long is your answer.

jalf
  • 243,077
  • 51
  • 345
  • 550
  • 1
    Technically, the next C++ standard may not have the long long type. There is nothing that can't be pulled from the standard even at this late stage. But +1 for being a (correct) pedant :-) – paxdiablo Sep 13 '09 at 15:30
  • 3
    and you called *me* a pedant. ;) But yeah, fair point. At the moment, it looks like it's going to get included in C++0x, but who knows. :) – jalf Sep 13 '09 at 18:48
  • 14
    Now we know—`long long` is in C++11. – Craig McQueen Nov 06 '13 at 00:18
13

The long long datatype is the largest built-in integral datatype in standard C99 and C++0x. Just as with all of the other integral datatypes, long long is not given an exact size in bytes. Instead, it is defined to be at least a 64-bit integer. While long long is not part of the official C++ standard, it is ubiquitously supported across modern compilers. Note, however, that many compilers for modern desktops define long and long long as both exactly 64 bits, whereas many compilers for embedded processors define long as 32 bits and long long as 64 bits (with a few exceptions, obviously).

If you need more range or absolutely cannot use the long long language extension, you're going to have to use one of the C or C++ libraries that are designed to work with extremely large or small numbers.

Michael Koval
  • 8,207
  • 5
  • 42
  • 53
  • 3
    "...as long long and long will both be 64-bits on most modern desktop CPUs." Not true. Depends on the compiler. In LLP64 model (which is used by VC), `long` remains 32-bit and `long long` is 64-bit. – Mehrdad Afshari Sep 13 '09 at 07:54
  • 1
    See this link http://en.wikipedia.org/wiki/LLP64#Specific_data_models for the possible 64 bit memory models. – Andy J Buchanan Sep 13 '09 at 07:55
  • 10
    FYI, `long long` is not in the C++ standard. It's added to C in C99. Currently, it's a ubiquitous extension supported by most compilers. – Mehrdad Afshari Sep 13 '09 at 07:57
  • 2
    long long is part of C++0x (or should I say C++1x, now?), see http://www.research.att.com/~bs/C++0xFAQ.html#long-long – stephan Sep 13 '09 at 08:13
  • 1
    Please advise where in the standard it says long longs are at least 64 bits. The only part of the standard I've seen that mentions this states that each of the integral types is at least as big as the next smaller one. That means a conforming implementation can give 8bits for char, int, long and long long. – paxdiablo Sep 13 '09 at 08:20
  • 2
    Pax, `<climits>` defines macros INT_MAX et al. The standard defines minimal values for these, effectively establishing the minimum number of bits required for the representation of the built-in integral types. It's 16 for int, 32 for long and (in the new standard) 64 for long long. – avakar Sep 13 '09 at 08:43
  • 2
    Where in the standard does it say that `long long` **exists**? *It does not even exist in C++ yet*. It is a C datatype, and yes, it will be added in C++0x, but at the moment, it does not exist in C++. -1 – jalf Sep 13 '09 at 09:00
  • 1
    Jalf, is there a C++ compiler that does not support long long? – Crashworks Sep 13 '09 at 10:19
  • 1
    Sure there is, Comeau in strict mode. :) – avakar Sep 13 '09 at 10:45
  • 1
    @Crashworks: The question was "What is the biggest integer datatype in C++". It was *not* "What is the biggest datatype accepted by this or that compiler". I think most compilers will disallow it if you compile in strict mode, not just Comeau. – jalf Sep 13 '09 at 11:09
  • 1
    Also note that C99 and C++0x support/will support extended integer types. So it can happen that `size_t` is bigger than unsigned long long. – Johannes Schaub - litb Sep 13 '09 at 14:00
  • 1
    Anyway I'm sorry you got a -1. You may as well say "the biggest datatype is `long long long long` in C++", which isn't supported by it either :) – Johannes Schaub - litb Sep 13 '09 at 14:01
9

You might prefer to avoid worrying about primitive names by mapping to the largest (fully realised) type on the compiling architecture via <cstdint> and its typedefs intmax_t and uintmax_t.

I was surprised no one else said this, but cursory research indicates it was added in C++11, which can probably explain the lack of previous mentions. (Although its fellow new primitive/built-in type long long was cited!)

Some compilers may also provide larger types, though these may come with caveats, for instance: Why in g++ std::intmax_t is not a __int128_t?

Personally, I'm using cstdint as it's easier to quickly see how many bytes minimum I'm using - rather than having to remember how many bits a given primitive corresponds to - and the standard means it avoids my types being platform-dependent. Plus, for what I do, uint8_t is faster and neater than endless unsigned chars!

edit: In hindsight, I want to clarify: uint8_t is not guaranteed to be equivalent to unsigned char. Sure, it is on my machines, and it probably is for you, too. But this equivalence is not required by the Standard; see: When is uint8_t ≠ unsigned char? For that reason, now when I need the Standard-defined special abilities of [[un]signed] char, I use only that.

Community
  • 1
  • 1
underscore_d
  • 6,309
  • 3
  • 38
  • 64
  • downvoter: Is there a real problem that you can tell the rest of us about, or are you just feeling bitter about something unrelated? – underscore_d Oct 26 '17 at 12:39
4

There are 128 bit packed integer and floating point formats defined in xmmintrin.h on compilers that support SSE to enable use of the SSE registers and instructions. They are not part of the C++ standard of course, but since they are supported by MSVC, GCC and the Intel C++ Compiler there is a degree of cross-platform support (at least OS X, Linux and Windows for Intel CPUs). Other ISAs have SIMD extensions so there are probably other platform/compiler specific extensions that support 128 or 256 bit SIMD instructions. Intel's upcoming AVX instruction set will have 256 bit registers, so we should see a new set of data types and intrinsics for that.

They don't behave quite like built-in data types (i.e., you have to use intrinsic functions instead of operators to manipulate them, and they work with SIMD operations) but since they do in fact map to 128-bit registers on the hardware they deserve mention.

Details about Streaming SIMD Extension (SSE) Intrinsics

Whatever
  • 445
  • 3
  • 4
3

boost::multiprecision::cpp_int is an arbitrary-precision integer type, so there is no "biggest integer datatype" in C++; there is just a biggest built-in integral type, which AFAIK is long in standard C++.

cdonat
  • 2,748
  • 16
  • 24
2

You can easily get a bigger datatype by defining your own class. You can get inspiration from the BigInteger class in Java. It's a nice one, but it's not necessarily an actual integer even if it acts exactly like one.

Omar Al-Ithawi
  • 4,988
  • 5
  • 36
  • 47
0

In the Borland and Microsoft compilers, __int64 is probably the largest you can get.

David Andres
  • 31,351
  • 7
  • 46
  • 36
0

The __int128_t and __uint128_t (i.e., unsigned __int128) datatypes are 128 bits long, double the size of a long long (which is 64 bits long, for those new to C++). However, if you are going to use them, you need to do some operator overloading, because the int128 datatypes are not deeply supported (in MinGW, at least). This is an example of how you can use them to show 2^x-1 up until 2^128-1:

#include <iostream>
#include <iterator>   // std::end

char base10_lookup_table[10]={'0','1','2','3','4','5','6','7','8','9'};

std::ostream&
operator<<( std::ostream& dest, __int128 value )
{
    std::ostream::sentry s( dest );
    if ( s ) {
        // Convert before negating so the minimum value doesn't overflow.
        __uint128_t tmp = value < 0 ? -static_cast<__uint128_t>( value ) : static_cast<__uint128_t>( value );
        char buffer[ 128 ];
        char* d = std::end( buffer );
        do
        {
            -- d;
            *d = base10_lookup_table[ tmp % 10 ];
            tmp /= 10;
        } while ( tmp != 0 );
        if ( value < 0 ) {
            -- d;
            *d = '-';
        }
        int len = std::end( buffer ) - d;
        if ( dest.rdbuf()->sputn( d, len ) != len ) {
            dest.setstate( std::ios_base::badbit );
        }
    }
    return dest;
}

std::ostream&
operator<<( std::ostream& dest, unsigned __int128 value )
{
    std::ostream::sentry s( dest );
    if ( s ) {
        __uint128_t tmp = value;  // unsigned: no sign handling needed
        char buffer[ 128 ];
        char* d = std::end( buffer );
        do
        {
            -- d;
            *d = base10_lookup_table[ tmp % 10 ];
            tmp /= 10;
        } while ( tmp != 0 );
        int len = std::end( buffer ) - d;
        if ( dest.rdbuf()->sputn( d, len ) != len ) {
            dest.setstate( std::ios_base::badbit );
        }
    }
    return dest;
}



int main ( void )
{
    __uint128_t big_value = 0;      // unsigned 128-bit integer

    for ( unsigned char i=0; i!=129; ++i )   // unsigned char can hold every index value used here (0..128)
    {
        std::cout << "1 less than 2 to the power of " << int(i) << " = " << big_value << "\n";
        if ( i != 128 )                       // a shift by 128 would be undefined behaviour
            big_value |= (__uint128_t)1 << i;
    }

    return 0;
}

The problem with the int128 datatypes is that not all compilers support them.

Jack G
  • 4,553
  • 2
  • 41
  • 50
0

Repl.it says that unsigned long long int is actually a valid type, and I find that very interesting.