The reason is that the decimal type is not normalized: there are multiple representations for the same number, and they convert to different strings.
This is not a special property of your database type; it is how decimal works in general. There is no special DataAnnotation or anything else attached to the variable.
(1m).ToString() == "1"
(1.00000000m).ToString() == "1.00000000"
((1m)==(1.00000000m)) == true
For a given double, there is only one valid representation, i.e. one combination of mantissa * 2^exponent.
For decimal, there are multiple valid representations of mantissa * 10^exponent. Each represents the same number, but the extra information carried by the chosen representation is used to select the default number of trailing digits when converting the decimal to a string. The exact details are not well-documented, and I found no information on what exactly happens with the exponent when decimals are added or multiplied. But the effect it has on ToString() is easy to verify.
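For example, on the .NET runtimes I have tried, addition keeps the larger of the two operands' scales while multiplication adds them; this is observed behaviour rather than anything the documentation guarantees:
(1.0m + 1.00m).ToString() == "2.00"
(1.0m * 1.00m).ToString() == "1.000"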
The drawback is that the Equals() and GetHashCode() operations are more complicated than for a normalized number format, and there have been subtle bugs in the implementation: C# Why can equal decimals produce unequal hash values?
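In other words, the two representations compare equal and, on current runtimes, also hash equally even though their bit patterns differ (the historical bug linked above has long been fixed):
(1m).Equals(1.00000000m) == true
((1m).GetHashCode() == (1.00000000m).GetHashCode()) == true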
This article by Jon Skeet goes into a bit more detail:
A decimal is stored in 128 bits, even though only 102 are strictly necessary. It is convenient to consider the decimal as three 32-bit integers representing the mantissa, and then one integer representing the sign and exponent. The top bit of the last integer is the sign bit (in the normal way, with the bit being set (1) for negative numbers) and bits 16-23 (the low bits of the high 16-bit word) contain the exponent. The other bits must all be clear (0). This representation is the one given by decimal.GetBits(decimal) which returns an array of 4 ints.
[...]
The decimal type doesn't normalize itself - it remembers how many decimal digits it has (by maintaining the exponent where possible) and on formatting, zero may be counted as a significant decimal digit.
You can verify that the two decimals you have are not identical by comparing the values returned by decimal.GetBits(), i.e.:
decimal.GetBits(1m) == {int[4]}
[0]: 1
[1]: 0
[2]: 0
[3]: 0
decimal.GetBits(1.00000000m) == {int[4]}
[0]: 100000000
[1]: 0
[2]: 0
[3]: 524288
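If you want to pull the sign and exponent out of that last element yourself, a few lines along these lines do it (the variable names are mine, just for illustration):
int[] bits = decimal.GetBits(1.00000000m);
int flags = bits[3];
bool isNegative = (flags & (1 << 31)) != 0;  // top bit is the sign
int scale = (flags >> 16) & 0xFF;            // bits 16-23 hold the exponent
// isNegative == false, scale == 8 (524288 == 8 << 16)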
It may be tempting to rely on this behaviour for formatting your decimals, but I would recommend always selecting a precision explicitly when converting to string, to avoid confusion and unforeseen surprises, for example when the number has been multiplied by some factor beforehand.
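For example, a format string pins the output no matter how the value was computed (CultureInfo lives in System.Globalization; the invariant culture here is just for the sake of the example):
decimal price = 1m * 1.000m;                             // stored with scale 3
price.ToString() == "1.000"
price.ToString("0.00", CultureInfo.InvariantCulture) == "1.00"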