There are many incorrect assumptions in your question.
First, there's no requirement regarding type sizes in C++. The standard only mandates the relative precision of the floating-point types:

> The type `double` provides at least as much precision as `float`, and the type `long double` provides at least as much precision as `double`. The set of values of the type `float` is a subset of the set of values of the type `double`; the set of values of the type `double` is a subset of the set of values of the type `long double`. The value representation of floating-point types is implementation-defined.

[N3337](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3337.pdf)
Most modern implementations map `float` and `double` to the IEEE-754 single- and double-precision formats, since hardware support for them is mainstream. However, `long double` doesn't have such wide support, because few people need more precision than `double` offers and the hardware for it costs a lot more. Therefore some platforms map it to IEEE-754 double precision, i.e. the same as `double`. Others map it to the 80-bit IEEE 754 extended-precision format if the underlying hardware supports it. Otherwise `long double` is represented by double-double arithmetic or IEEE-754 quadruple precision.
Moreover, precision doesn't scale linearly with the number of bits in the type. It's easy to see that `double` is more than twice as precise as `float` and has 8 times the exponent range despite taking only twice the storage, because it has 53 bits of significand compared to 24 in `float`, plus 3 more exponent bits. Types can also have trap representations or padding bits, so different types may have different ranges even when they have the same size and belong to the same category (integral or floating-point).
So the important thing here is `std::numeric_limits<long double>::digits`. If you print that, you'll see that `long double` has 64 bits of significand, which is just 11 bits more than `double`. See it live. That means your compiler uses the 80-bit extended-precision format for `long double`, and the rest is just padding bytes to keep the alignment. In fact GCC has various options that will change your output:

- `-malign-double` and `-mno-align-double` for controlling the alignment of `long double`
- `-m96bit-long-double` and `-m128bit-long-double` for changing the padding size
- `-mlong-double-64`, `-mlong-double-80` and `-mlong-double-128` for controlling the underlying `long double` implementation
By changing these options you'll get different results for `long double`. You'll get size = 10 if you disable padding, but that comes at a performance cost due to misalignment. See more demos on Compiler Explorer.
On PowerPC you can see the same phenomenon when changing the floating-point format. With `-mabi=ibmlongdouble` (double-double arithmetic, which is the default) you'll get (size, digits10, digits2) = (16, 31, 106), but with `-mabi=ieeelongdouble` the tuple becomes (16, 33, 113).
For more information you should read https://en.wikipedia.org/wiki/Long_double
> And I also want to know how can I get better precision, without defining my own data type
The keyword to search for is arbitrary-precision arithmetic. There are various libraries for that, which you can find in the List of arbitrary-precision arithmetic software. You can find more information in the tags bigint, biginteger or arbitrary-precision.