It is impossible to know what tolerance to use to accept unequal numbers as equal without knowing what calculation errors can exist in those numbers and what is acceptable for the purpose of the application.
It is possible that a few simple arithmetic operations will produce infinite error, and it is also possible that millions of arithmetic operations will produce a result with no error. Calculating what error may have occurred must be done individually for each computation; there is no general rule.

There is not even a general rule for the type of error that is acceptable: Some calculations produce errors proportional to the results (relative errors), some produce absolute errors, and some produce errors that are complicated functions of data that might not even be present in the values being examined. So even a comparison routine that takes a parameter specifying the permitted relative error is insufficient for general use.
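As an illustration only, here is a sketch in Python showing how relative and absolute tolerances behave differently. The tolerance values are arbitrary placeholders chosen for the demonstration, not recommendations:

```python
import math

# Two values that differ only because of rounding in binary floating point.
a = 0.1 + 0.2
b = 0.3

# Exact comparison fails even though exact mathematics would give equality.
print(a == b)                                   # False

# A relative tolerance scales with the magnitudes of the operands.
print(math.isclose(a, b, rel_tol=1e-9))         # True

# A relative tolerance is useless near zero: 1e-12 and 0.0 are "far apart"
# relative to their magnitudes, however small the absolute difference is.
print(math.isclose(1e-12, 0.0, rel_tol=1e-9))   # False

# An absolute tolerance handles that case, but choosing it requires knowledge
# of the computation; 1e-9 here is an arbitrary placeholder, not advice.
print(math.isclose(1e-12, 0.0, abs_tol=1e-9))   # True
```

Which of these behaviors is appropriate, and what tolerance to pass, depends entirely on the errors your computation can produce.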
Additionally, accepting unequal numbers as equal reduces false negatives (situations where numbers that would have been equal if calculated with exact mathematics are unequal because approximate arithmetic was used) at the expense of increasing false positives (accepting numbers as equal even though they are actually unequal). Some applications can tolerate this. Some cannot.
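A minimal sketch of that tradeoff, again using an arbitrary placeholder tolerance:

```python
import math

tol = 1e-9  # arbitrary placeholder, not a recommendation

# False negative avoided: exact mathematics says these are equal, but
# floating-point rounding makes them unequal; the tolerance accepts them.
x = sum([0.1] * 10)                           # 0.9999999999999999
print(x == 1.0)                               # False
print(math.isclose(x, 1.0, rel_tol=tol))      # True

# False positive introduced: these two values are genuinely different,
# but they differ by less than the tolerance, so they are accepted as equal.
y = 1.0
z = 1.0 + 1e-12
print(y == z)                                 # False
print(math.isclose(y, z, rel_tol=tol))        # True, even though y and z differ
```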
If you want more guidance, you need to explain further what you are doing and what your goals are.