Depending on what financial transactions you're performing, some rounding is likely to be unavoidable. If an item costs $1.50 with 7% sales tax, you aren't going to be charged $1.605; the price you pay will be either $1.60 or $1.61. (US currency units theoretically include "mils", or thousandths of a dollar, but the smallest denomination coin is $0.01, and almost all transactions are rounded to the nearest cent.)
If you're doing simple calculations (just adding and subtracting quantities and multiplying them by integers), all the results will be whole numbers of cents. But if you use binary floating-point to represent the number of dollars, most amounts will not be exactly representable; a calculation that should yield $0.01 might actually yield $0.01000000000000000020816681711721685132943093776702880859375.
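As a quick illustration (this uses Python's `decimal` module only to display the value a binary double actually stores), the double nearest to 0.01 is not 0.01 itself, and the error shows up in ordinary sums:

```python
from decimal import Decimal

# Decimal(float) shows the exact value the binary double stores.
print(Decimal(0.01))
# 0.01000000000000000020816681711721685132943093776702880859375

# The classic symptom: ten dimes don't add up to exactly one dollar.
print(sum(0.10 for _ in range(10)) == 1.0)   # False
```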
You can avoid that problem by using integers to represent the number of cents (or, equivalently, by using fixed-point if the language supports it), or by using decimal floating-point, which can represent 0.01 exactly.
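A minimal sketch of both approaches, again in Python (the prices and names are made up for illustration):

```python
from decimal import Decimal

# Option 1: integer cents -- exact for adding, subtracting, and
# multiplying by integers.
price_cents = 150                      # $1.50
total_cents = 3 * price_cents + 25     # three items plus a 25-cent fee
print(total_cents)                     # 475, i.e. $4.75, exact

# Option 2: decimal floating-point -- Decimal('0.01') is stored exactly,
# unlike the binary double 0.01.
print(Decimal('1.50') + Decimal('0.01'))   # Decimal('1.51'), exact
```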
But for more complex operations, like computing 7% sales tax, dividing a sum of money into 3 equal parts, or especially compound interest, there are still going to be results that aren't exactly representable unless you use an arbitrary-precision package like GMP.
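For example, even with Python's `decimal` module (28 significant digits by default), the division and the compounding below have to be cut off somewhere:

```python
from decimal import Decimal

# 7% tax on $1.50 is fine as a four-place decimal...
print(Decimal('1.50') * Decimal('0.07'))   # 0.1050

# ...but a third of a dollar is not representable in any finite
# number of decimal digits.
print(Decimal('1.00') / 3)                 # 0.3333333333333333333333333333

# Compound interest (5% nominal, compounded monthly) quickly needs
# more digits than any fixed precision provides.
print(Decimal('100.00') * (1 + Decimal('0.05') / 12) ** 12)
```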
As I understand it, there are laws and regulations that specify exactly how rounding errors are to be resolved. If you apply 7% sales tax to $1.50, you can't just pick between $1.60 and $1.61; the law tells you exactly which one is legally correct.
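For instance, the $1.605 case above lands exactly on a rounding boundary, and different rounding rules (shown here with Python's `decimal` module) give different answers; which one is correct is a legal question, not an arithmetic one:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

tax_total = Decimal('1.50') * Decimal('1.07')   # Decimal('1.6050')

# Round half away from zero vs. banker's rounding (round half to even).
print(tax_total.quantize(Decimal('0.01'), rounding=ROUND_HALF_UP))    # 1.61
print(tax_total.quantize(Decimal('0.01'), rounding=ROUND_HALF_EVEN))  # 1.60
```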
If you're writing financial software to be used by other people, you need to find out exactly what the regulations say. Once you know that, you can determine what representation (integers, fixed-point, decimal floating-point, or whatever) can best be used to get the legally required results.
(Disclaimer: I do not know what these regulations actually say.)