Computer hardware does not calculate with real numbers the way we do. Neither way is "correct", because some numbers require infinite expansions and we don't have time to deal with infinitely many digits. For example, when we calculate 1/3 by hand we get 0.3333.... We are used to the inaccuracy of truncating this expansion and we think of the result as "correct". Of course it is not exactly correct.
Computer hardware does not use base 10 calculations the way we do. It uses a form of base 2 calculation. The infinite expansions are different in base 2. For example, 1/10 (base 10) is 0.0001100110011... (base 2), with the block 0011 repeating forever.
Both systems have inaccuracies because we must truncate infinite expansions. The inaccuracies are different, though. Computer hardware compensates for the more frequent infinite expansions in base 2 by keeping many fractional bits (an IEEE 754 double carries 53 significant bits).
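A quick way to see this truncation for yourself, using nothing but a JavaScript console (Node.js or a browser):

```javascript
// Neither 0.1 nor 1/3 can be stored exactly in binary, so asking for
// more digits than are usually shown exposes the rounding that has
// already happened when the value was stored.
console.log((0.1).toFixed(20));   // typically "0.10000000000000000555"
console.log((1 / 3).toFixed(20)); // typically "0.33333333333333331483"
```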
The takeaway is that there will always be situations in which your answers are not exactly what you expect.
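The classic example of such a surprise, again in plain JavaScript:

```javascript
// 0.1 and 0.2 are each stored slightly inaccurately in base 2,
// and the sum carries the combined error.
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false
```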
If your application cannot accept these inaccuracies (and some accounting applications cannot), then you need to emulate base 10 arithmetic in software. There are packages for this; I have no experience with them. To get you started, here is one:
https://github.com/MikeMcl/bignumber.js/
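A minimal sketch of how such a package can be used, based on bignumber.js's documented BigNumber API. I have not verified this against a particular version, so treat it as a starting point rather than a recipe:

```javascript
// npm install bignumber.js
const BigNumber = require('bignumber.js');

// Values are passed as strings so they never go through
// binary floating point on the way in.
const a = new BigNumber('0.1');
const b = new BigNumber('0.2');

console.log(a.plus(b).toString()); // "0.3" -- exact
console.log(0.1 + 0.2);            // 0.30000000000000004, for comparison
```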