#include "boost/multiprecision/cpp_int.hpp"
#include <QDebug>   // qDebug() comes from Qt

using boost::multiprecision::uint128_t;
...
qDebug() << sizeof(uint128_t);

Output: 24.

Basic math (128 / 8 = 16) says otherwise. I'm a bit disappointed and would like some answers, because from what I'm used to, that naming scheme (uintN_t, where N is the number of bits) describes not just how much data the variable can hold, but also a fixed, cross-platform size, and the latter is IMHO even more important. What am I doing wrong?
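For reference, here's a minimal standalone check without Qt (a sketch, not my actual project code; it only assumes Boost.Multiprecision is on the include path, and the sizes in the comments are what I see/expect on my setup):

#include <cstdint>
#include <iostream>
#include <boost/multiprecision/cpp_int.hpp>

int main()
{
    using boost::multiprecision::uint128_t;

    // Built-in fixed-width types: sizeof is exactly N / 8 bytes.
    std::cout << "sizeof(std::uint32_t): " << sizeof(std::uint32_t) << '\n';   // 4
    std::cout << "sizeof(std::uint64_t): " << sizeof(std::uint64_t) << '\n';   // 8

    // Boost's uint128_t is a class type (cpp_int with a 128-bit backend),
    // not a raw 16-byte integer, so its sizeof can come out larger.
    std::cout << "sizeof(uint128_t):     " << sizeof(uint128_t) << '\n';       // 24 here
}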
I'm using a standard x86-64 CPU and compiling with VS2013 on Windows.
UPDATE: the Boost version is 1.61.