4

Basic math (128 / 8 = 16) says otherwise. I'm kinda disappointed and want some answers, since from what I'm used to, that naming convention (`type_num_of_bits_t`) describes not just how much data you can put into the variable, but also a fixed, cross-platform size, and the latter is IMHO even more important. What am I doing wrong?

#include "boost/multiprecision/cpp_int.hpp"
using boost::multiprecision::uint128_t;

...

qDebug() << sizeof(uint128_t);

Output: 24.

I'm using a standard x86-64 CPU, compiling with VS2013 on Windows.

UPDATE: Boost version is 1.61.
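
For code that assumes a fixed 16-byte layout, here is a minimal sketch of the kind of compile-time guard that would surface this early; on the setup above it fails, since `sizeof(uint128_t)` is 24:

```cpp
#include "boost/multiprecision/cpp_int.hpp"
using boost::multiprecision::uint128_t;

// Make the size assumption explicit. On VS2013 with cpp_int this
// fires, because sizeof(uint128_t) == 24 rather than 16.
static_assert(sizeof(uint128_t) == 16,
              "uint128_t is not a plain 16-byte integer on this platform");
```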

  • Heh, thanks for the edits, seems like I got too emotional writing this =D – Leontyev Georgiy Jan 26 '17 at 15:02
  • 3
    [Cannot reproduce](http://melpon.org/wandbox/permlink/oSB8GpK75qUDH6D6), and the [ABI says](https://software.intel.com/sites/default/files/article/402129/mpx-linux64-abi.pdf) it should be 16 too. Also tried it on my 64-bit Linux, still getting 16. Please add enough information about your platform to make the issue reproducible. – Baum mit Augen Jan 26 '17 at 15:03
  • Well, the exact same code, when run with VS2013's compiler, shows 24. Checked that out. – Leontyev Georgiy Jan 26 '17 at 15:07
  • @BaummitAugen, maybe `boost::multiprecision::uint128_t` simply uses `unsigned __int128` when the compiler supports that (as GCC and Clang do on x86_64). – Jonathan Wakely Jan 26 '17 at 15:15
  • @JonathanWakely Apparently so, yes. Well, I edited the necessary information in, so all is good now I guess. – Baum mit Augen Jan 26 '17 at 15:16
  • @JonathanWakely if this is true, then it makes Boost dangerous to use when aiming for cross-platform development. What we've discovered here shouldn't occur between different platforms. – Leontyev Georgiy Jan 26 '17 at 15:17
  • 2
    @LeontyevGeorgiy How so? Windows and Linux are not ABI compatible anyways, so why do you care if `sizeof` some types is different between them? – Baum mit Augen Jan 26 '17 at 15:24
  • @BaummitAugen I was going to send the results of a computation over the network to a Linux client. Imagine the frustration it would have produced if I hadn't worked this out now. – Leontyev Georgiy Jan 26 '17 at 15:39
  • @BaummitAugen what I mean is that they should have used their own implementation on both platforms if the underlying memory layout is different. – Leontyev Georgiy Jan 26 '17 at 15:49
  • 2
    @LeontyevGeorgiy Just sending the direct in-memory representation between two machines would seem like a pretty bad idea anyway. From the docs I see there's support for [boost serialization](http://www.boost.org/doc/libs/1_63_0/libs/multiprecision/doc/html/boost_multiprecision/tut/serial.html), as well as [import/export](http://www.boost.org/doc/libs/1_63_0/libs/multiprecision/doc/html/boost_multiprecision/tut/import_export.html) (see the sketch after these comments). – Dan Mašek Jan 26 '17 at 15:53
  • 2
    ^ That, unless you know exactly what you are doing. People wanting to send stuff between two machines is not a good reason to pass on the advantages of the built-in `__int128`. – Baum mit Augen Jan 26 '17 at 16:02
  • 2
    Yes, I completely agree with the previous two comments. _"What we've discovered here shouldn't occur between different platforms."_ is total nonsense. Are you also concerned that `std::string` and `std::list` can have different sizes on different platforms? If you're doing cross-platform development then you need to handle such differences, not label them "dangerous" because you're not doing cross-platform development correctly. – Jonathan Wakely Jan 26 '17 at 16:11
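
Regarding sending values between machines: here is a minimal sketch of the import/export approach mentioned in the comments, using Boost.Multiprecision's `export_bits`/`import_bits`; the buffer handling is illustrative, not a fixed wire format.

```cpp
#include <boost/multiprecision/cpp_int.hpp>
#include <cstdint>
#include <iterator>
#include <vector>

using boost::multiprecision::uint128_t;

int main() {
    uint128_t value = 1;
    value <<= 100; // some 128-bit value

    // Serialize into a byte sequence, most significant byte first.
    std::vector<std::uint8_t> bytes;
    boost::multiprecision::export_bits(value, std::back_inserter(bytes), 8);

    // Rebuild on the receiving side, regardless of sizeof(uint128_t) there.
    uint128_t restored;
    boost::multiprecision::import_bits(restored, bytes.begin(), bytes.end(), 8);

    return value == restored ? 0 : 1;
}
```

Because the byte order is defined (most significant byte first by default), the receiver can reconstruct the value even if its local representation of `uint128_t` has a different size or limb layout.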

1 Answer

11

From the cpp_int documentation (Boost 1.61):

When used at fixed precision, the size of this type is always one machine word larger than you would expect for an N-bit integer: the extra word stores both the sign, and how many machine words in the integer are actually in use. The latter is an optimisation for larger fixed precision integers, so that a 1024-bit integer has almost the same performance characteristics as a 128-bit integer, rather than being 4 times slower for addition and 16 times slower for multiplication (assuming the values involved would always fit in 128 bits). Typically this means you can use an integer type wide enough for the "worst case scenario" with only minor performance degradation even if most of the time the arithmetic could in fact be done with a narrower type.

The extra machine word (8 bytes on x86-64) makes the size 24 instead of the expected 16.
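
A rough mock-up (hypothetical names, not cpp_int's actual internals) of how the numbers add up on a 64-bit target without a native 128-bit type, such as VS2013:

```cpp
#include <cstdint>

// Illustrative layout only: two 64-bit limbs hold the 128 bits of
// magnitude, and one extra machine word holds the bookkeeping.
struct mock_uint128 {
    std::uint64_t limbs[2];   // 16 bytes of magnitude
    std::uint32_t limb_count; // how many limbs are actually in use
    bool          sign;       // stored even for the unsigned typedef
    // 3 bytes of padding round the struct up to 8-byte alignment
};

static_assert(sizeof(mock_uint128) == 24,
              "2 limbs + 1 bookkeeping word = 24 bytes");
```

As discussed in the comments above, GCC and Clang on x86_64 can instead back `uint128_t` with the native `unsigned __int128`, which is why reproduction attempts there show the expected 16.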

lcs
  • Thanks! It doesn't make much sense though... They should have put this as a comment inside cpp_int.hpp. – Leontyev Georgiy Jan 26 '17 at 15:09
  • I edited my answer to include the full statement from the documentation, appears to be an optimization for very large integers. – lcs Jan 26 '17 at 15:10