I'm looking for a way to force the computer to carry out a floating-point operation with a set number of significant digits. This is purely for learning purposes, so I don't care about the loss of accuracy in the result.
For example, if I have:
float a = 1.67;
float b = 10.0;
float c = 0.01;
float d = a * b + c;
And I want every number represented with 3 significant digits, I'd like to see:
d = 16.7;
Not:
d = 16.71;
So far, the closest thing I've found is this answer: Limit floating point precision?
But that strategy would bloat my code: I'd have to round every floating-point variable to the precision I want, and then do the same with every result.
Is there an automatic way to fix the precision?