Given a class, `Decimal`, what is the difference between the expressions `(Decimal)x` and `Decimal(x)`?

Additional information, in case it matters: `x` is an object of type `MLBigNumVar`, another user-defined class.
Assuming `Decimal` is a type name and `x` is a value, the two are equivalent: both convert the value of `x` to the type `Decimal`. The first uses cast notation (a C-style cast) and the second uses functional notation; for a single operand, the C++ standard defines the functional form to be equivalent to the corresponding cast expression, so both have the same meaning.