Below is a test block of code that is supposed to determine whether two doubles are equal within a tolerance.
#include <cmath>  // std::fabs gives the floating-point absolute value

double lhs_1 = 0.02;
double lhs_2 = 0.04;
double rhs = 0.03;
double tolerance = 0.01;

// On paper, both differences are exactly 0.01 and should be within tolerance.
bool is_match_1 = (std::fabs(lhs_1 - rhs) <= tolerance);
bool is_match_2 = (std::fabs(lhs_2 - rhs) <= tolerance);
However, is_match_2 comes out false while is_match_1 comes out true. I understand that numbers stored in the computer are discrete values, not continuous, so the subtraction 0.04 - 0.03 apparently lands slightly above 0.01. Can someone share a solution? I would like to err on the side of passing the test within reason. Is there a way to increment a double by one step at whatever precision it currently has (I am not familiar with the bit layout of a double)? If so, I might just bump the tolerance value up by that amount to allow for this granularity issue.
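For what it's worth, this is the kind of thing I'm imagining, if std::nextafter from <cmath> does what I think it does (is_within_tolerance is just a placeholder name, and I'm not sure one extra step is always enough margin in general):

#include <cmath>   // std::fabs, std::nextafter
#include <limits>  // std::numeric_limits

// Widen the tolerance by one representable step ("1 ULP") so a difference
// that lands just barely above the nominal tolerance still counts as a match.
bool is_within_tolerance(double lhs, double rhs, double tolerance)
{
    double widened = std::nextafter(tolerance, std::numeric_limits<double>::infinity());
    return std::fabs(lhs - rhs) <= widened;
}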
EDIT:
When this is really implemented, the user will define the inputs and tolerances, so I'm just trying to give them the expected output for whatever values they enter.
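To make that concrete, the user-facing check would boil down to something like this, building on the is_within_tolerance sketch above (the values here are just stand-ins for whatever the user enters):

#include <iostream>

int main()
{
    // In the real implementation these come from user input.
    double user_lhs = 0.04;
    double user_rhs = 0.03;
    double user_tolerance = 0.01;

    bool matches = is_within_tolerance(user_lhs, user_rhs, user_tolerance);
    std::cout << (matches ? "match" : "no match") << '\n';  // I expect "match" with the widened tolerance
    return 0;
}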