Round to two decimal places: multiply by 100, round to the nearest integer, divide by 100.0. (Note, though, that you can't say in general that the resulting floating-point number has exactly two base-ten digits after the decimal point: values like 3.14 need not be exactly representable in the native binary format.)
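For instance, a minimal sketch in Java (the language is just for illustration; any language with IEEE-754 doubles behaves the same way):

```java
import java.math.BigDecimal;

public class RoundDemo {
    public static void main(String[] args) {
        double x = 3.14159;
        // Scale up, round to the nearest integer, scale back down.
        double rounded = Math.round(x * 100) / 100.0;
        System.out.println(rounded);                 // prints 3.14
        // But the double itself is not exactly 3.14; it holds the nearest
        // representable binary value, roughly 3.14000000000000012...
        System.out.println(new BigDecimal(rounded));
    }
}
```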
For that reason, I would argue that multiplying by 100 and storing the result as an integer, with the understanding that it represents hundredths of a unit, is a more accurate way to represent a "number accurate to two decimal places".
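A minimal sketch of that fixed-point idea, again in Java (the variable names are illustrative):

```java
public class Cents {
    public static void main(String[] args) {
        // 19.99 stored as 1999 hundredths; integer arithmetic stays exact.
        long priceInHundredths = 1999;
        long taxInHundredths = 160;      // i.e. 1.60
        long total = priceInHundredths + taxInHundredths;
        // Convert to decimal notation only at the display boundary
        // (this simple formatting assumes non-negative values).
        System.out.printf("%d.%02d%n", total / 100, total % 100);
        // prints 21.59
    }
}
```

The key design choice is that no value ever passes through a binary fraction: every stored quantity is a whole number of hundredths, so addition and subtraction are exact, and formatting happens only when the number is shown to a user.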