
I've been using types from cstdint (such as uint32_t) in my code regularly, but now they don't quite fit my needs, particularly with regard to templates.

Is there a way to specify an integer type that is twice the size of a template argument? When my template is passed a uint32_t, I need it to create a uint64_t for one variable within the function. Perhaps more difficult: when passed a uint64_t, I need it to create a 'uint128_t'. I could do this with an array of two of the template arguments, but then I can't pass that array to other template functions. This is a performance-critical section of code (I'm doing cryptography).
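To illustrate with the 32-bit case, here is the non-generic version I can write today (a minimal sketch of what I'm trying to generalize):

#include <cstdint>

// Fixed-width version: the product of two 32-bit values needs 64 bits.
std::uint64_t square(std::uint32_t x) {
    std::uint64_t wide = x;
    return wide * wide;
}

What I want is the same function as a template, where the type of wide (and of the return value) is "the unsigned type twice as wide as T".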

Related to that, is there some other header I can include (in order of preference: standard, boost, other) that gives me 128-bit integers? Looks like this question answers this particular part: Fastest 128 bit integer library

Is there a way to specify that I want to use the largest integer available that is not greater than a specific size? This maximum size would itself be a function of sizeof(T).
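Something along these lines is what I have in mind; the name largest_uint_at_most below is purely hypothetical, and I don't know whether such a facility exists:

#include <cstddef>  // std::size_t

// Hypothetical trait: the widest standard unsigned type whose size in bytes
// does not exceed MaxBytes. Declaration only -- this is the part I'm asking about.
template< std::size_t MaxBytes >
struct largest_uint_at_most;

// Intended use inside another template, e.g.:
//     typename largest_uint_at_most< 2 * sizeof(T) >::type accumulator;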

David Stone
  • This has been discussed before, search this site and you might find something useful. – Kerrek SB Nov 30 '11 at 18:21
  • what do you need 128-bit integers for? – Cheers and hth. - Alf Nov 30 '11 at 18:21
  • Related: http://stackoverflow.com/questions/1188939/representing-128-bit-numbers-in-c – John Dibling Nov 30 '11 at 18:27
  • I tried searching, but perhaps I don't know what the proper term to search for is. I'm aware of things like uintmax_t, which comes close to answering my last question, but it is important that the type is not larger than a specified size (if possible). I tried searching for phrases along the lines of "integer type twice the size of another", but to no avail. I'm aware of some "big int" libraries, but my understanding is that most of them are optimized for very large ints of arbitrary precision, I was hoping for some pointers to a fast implementation of 128-bit specifically. – David Stone Nov 30 '11 at 18:29
  • Admittedly, the 128-bit integer part is the smallest part of my question. If that proves too distracting, I could edit that out. I only mentioned it because it's a pre-requisite for answering question 1 about relative-width integer types. – David Stone Nov 30 '11 at 18:30

1 Answer


"Extended arithmetic" is a shortcoming of the C family of languages. There is no way to obtain the processor's integer overflow flag, so there is no portable way to write an optimal 128-bit integer class.

For best performance (to compete with other crypto libraries), you might need a static library with custom assembly inside. Unfortunately, I don't know of a portable (widely ported) interface to such a library.
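For instance, fully portable code has to reconstruct the carry from the wrapped-around result; whether that compiles down to a single add-with-carry instruction is up to the optimizer (a minimal sketch of a two-limb addition, not a complete 128-bit class):

#include <cstdint>

// A 128-bit value stored as two 64-bit limbs.
struct uint128 {
    std::uint64_t lo;
    std::uint64_t hi;
};

// Portable addition: with no access to the CPU carry flag, the carry out of
// the low limb is recovered by testing for unsigned wrap-around.
inline uint128 add(uint128 a, uint128 b) {
    uint128 result;
    result.lo = a.lo + b.lo;
    std::uint64_t carry = (result.lo < a.lo) ? 1 : 0;
    result.hi = a.hi + b.hi + carry;
    return result;
}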

If you just want a map from each fundamental type with N bits to that with 2N bits, then make a simple metafunction:

#include <cstdint>

// Maps an unsigned integer type to the unsigned type with twice as many bits.
// The primary template is left undefined, so unsupported types fail to compile.
template< typename half >
struct double_bits;

template<>
struct double_bits< std::uint8_t >
    { typedef std::uint16_t type; };

template<>
struct double_bits< std::uint16_t >
    { typedef std::uint32_t type; };

template<>
struct double_bits< std::uint32_t >
    { typedef std::uint64_t type; };
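For example, a widening multiply that never loses the high half of the product could then be written like this (a minimal sketch, assuming the specializations above are in scope):

// Multiply two half-width values into a full-width product.
template< typename half >
typename double_bits< half >::type
wide_multiply(half a, half b) {
    typedef typename double_bits< half >::type full;
    return static_cast< full >(a) * static_cast< full >(b);
}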
Potatoswatter
  • +1 for template class. My mind was stuck in template functions and I was trying to find out how to get a function to return a type instead of a value. Searching for that only gives me stuff about the return type of a function instead of returning a type from a function. – David Stone Nov 30 '11 at 18:42
  • @David : Specifically, these sorts of classes are called _metafunctions_. Searching for that term will yield some good reading material. :-] – ildjarn Nov 30 '11 at 19:18
  • I have something similar for float -> double -> long double that I use for enhancing precision in floating point numerics. I find it useful. – emsr Dec 01 '11 at 14:58