
I have an integer literal in the format 0x75f17d6b3588f843b13dea7c9c324e51. Is there a way to avoid the compiler error "integer literal is too large to be represented in any integer type"?

I know I can work with values of this size (I'm using uint128_t from the EOS library, and if I insert the value manually, it works).

Is there a way to somehow parse this string directly into the exact same integer at run time?

Bida

2 Answers


You can write a user-defined literal (available since C++11) for 128-bit integers.

A raw literal operator takes a single const char* as its parameter. You can then write a function body that parses the string.

For example:

// Use __uint128_t for demonstration.
// The loop requires C++14 relaxed constexpr; see the note below.
constexpr __uint128_t operator""_uint128_t(const char* x)
{
    __uint128_t y = 0;
    // i starts at 2 to skip the "0x" prefix; only hexadecimal digits are handled.
    for (int i = 2; x[i] != '\0'; ++i)
    {
        y *= 16ull;
        if ('0' <= x[i] && x[i] <= '9')
            y += x[i] - '0';
        else if ('A' <= x[i] && x[i] <= 'F')
            y += x[i] - 'A' + 10;
        else if ('a' <= x[i] && x[i] <= 'f')
            y += x[i] - 'a' + 10;
    }
    return y;
}

Obviously, this implementation is problematic because I'm too lazy to develop a full solution: it only supports hexadecimal, it does not check the 0x prefix, and so on. It also requires C++14 relaxed constexpr functions. But it demonstrates that you can actually parse this string directly into the exact same integer.

Let's test it out:

#include <cstdint>
#include <iostream>

int main()
{
    auto abc = 0x1234567890ABCDEFfedcba0987654321_uint128_t;
    // Print the two 64-bit halves to check the parsed value.
    std::uint64_t higher = abc >> 64;
    std::uint64_t lower = abc;
    std::cout << std::hex << higher << ' ' << lower;
}

http://coliru.stacked-crooked.com/a/fec4fc0fd4ff1418
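
If relaxed constexpr is not available, the same parsing can be done within the stricter C++11 rules by using recursion, so that every constexpr function body is a single return statement. Below is a minimal sketch under that assumption; the helper names and the _uint128_c11 suffix are invented for illustration, and __uint128_t is the GCC/Clang built-in:

constexpr __uint128_t hex_digit(char c)
{
    // Map one hexadecimal character to its value; the throw is only reached
    // (and only an error in a constant expression) for an invalid digit.
    return ('0' <= c && c <= '9') ? static_cast<__uint128_t>(c - '0')
         : ('a' <= c && c <= 'f') ? static_cast<__uint128_t>(c - 'a' + 10)
         : ('A' <= c && c <= 'F') ? static_cast<__uint128_t>(c - 'A' + 10)
         : throw "not a hexadecimal digit";
}

constexpr __uint128_t parse_hex_128(const char* s, __uint128_t acc)
{
    // Tail recursion replaces the loop of the C++14 version above.
    return *s == '\0' ? acc : parse_hex_128(s + 1, acc * 16 + hex_digit(*s));
}

constexpr __uint128_t operator""_uint128_c11(const char* x)
{
    return parse_hex_128(x + 2, 0);   // still assumes a "0x" prefix, as above
}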

Alexis Wilke

128-bit integer literals are not mandated by the standard, so it's up to the implementation whether to allow them. Most don't, so you'll need to break the value into two 64-bit halves and use bitwise operators to combine them:

__uint128_t num = ((__uint128_t)0x75f17d6b3588f843 << 64) | 0xb13dea7c9c324e51;

A good compiler should perform the operations at compile time.
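
If you want that guarantee rather than an expectation, you can force compile-time evaluation by wrapping the combination in a constexpr helper and checking the halves with static_assert. This is a small sketch; the name make_uint128 is just illustrative, and __uint128_t is the GCC/Clang built-in:

#include <cstdint>

constexpr __uint128_t make_uint128(std::uint64_t high, std::uint64_t low)
{
    // Shift the high half up and OR in the low half, all in a constant expression.
    return (static_cast<__uint128_t>(high) << 64) | low;
}

constexpr __uint128_t num = make_uint128(0x75f17d6b3588f843ull, 0xb13dea7c9c324e51ull);
static_assert((num >> 64) == 0x75f17d6b3588f843ull, "high half");
static_assert(static_cast<std::uint64_t>(num) == 0xb13dea7c9c324e51ull, "low half");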

dbush
  • Hi, this looks cool, but the result is not the same as the original; it comes out as 0x514e329c7cea3db143f888356b7df175 – Bida Jul 26 '18 at 13:07
  • @Bida How exactly are you getting this value? If you're running on a little-endian machine like x86/x64, the bytes in integer types are stored with the least significant byte first. If you print the bytes in order, you'll see that reflected in your output (see the sketch after these comments). – dbush Jul 26 '18 at 13:10
  • The Standard has no rules against 128 bit integer literals. As far as the Standard is concerned, implementations may even give them type `int`. However, the standard-mandated minimum is only 64 bits, and that's for `long long`. – MSalters Jul 26 '18 at 13:11
  • @dbush it is little endian for EOS, sorry, you're right – Bida Jul 26 '18 at 13:15
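
The little-endian behaviour dbush describes in the comment above can be made visible by dumping the value's bytes in memory order. A small sketch, again assuming GCC/Clang's __uint128_t; on x86/x64 it prints 51 4e 32 9c ... f1 75, which is the byte-reversed form Bida reported:

#include <cstddef>
#include <cstdio>
#include <cstring>

int main()
{
    __uint128_t num = ((__uint128_t)0x75f17d6b3588f843 << 64) | 0xb13dea7c9c324e51;

    // Copy the object representation and print it byte by byte, in memory order.
    unsigned char bytes[sizeof num];
    std::memcpy(bytes, &num, sizeof num);
    for (std::size_t i = 0; i < sizeof num; ++i)
        std::printf("%02x ", bytes[i]);
    std::printf("\n");
}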