There is no standard way, because whether or not you want to treat a small number as if it were zero depends on how you computed the number and what it's for. This in turn depends on the expected size of any errors introduced by your computations, and perhaps on errors of physical measurement that determined your original inputs.
For example, suppose that your value represents the length of a journey in miles in some mapping software. Then you are happy to treat `1e-7` as equal to zero, because in that context it is a very small number: it has come about through a rounding error or some other source of slight inexactness.
On the other hand, suppose that your value represents the size of a molecule in metres in some electron microscopy software. Then you certainly don't want to treat `1e-7` as equal to zero, because in that context it's a very large number.
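To sketch the point in code (Python, with illustrative tolerances that are assumptions, not standards), the very same value passes a zero test under one context's tolerance and fails under another's:

```python
def is_effectively_zero(value, tolerance):
    """Treat |value| below the context-specific tolerance as zero."""
    return abs(value) < tolerance

journey_miles = 1e-7
molecule_metres = 1e-7

# Mapping software: nothing much shorter than a metre (~6e-4 miles)
# matters, so a tolerance of 1e-4 miles is plausible.
print(is_effectively_zero(journey_miles, tolerance=1e-4))     # True

# Microscopy software: features are resolved down to ~1e-10 m
# (about an angstrom), so the tolerance must be far smaller.
print(is_effectively_zero(molecule_metres, tolerance=1e-10))  # False
```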
You should first consider what accuracy it would be appropriate to present your value with: what's the error bar, or how many significant figures can you reasonably display. This will give you some idea of a suitable tolerance for testing against zero, although it still might not settle the case.

For the mapping software, you can probably treat a journey as zero if it's less than some fixed value, although that value might itself depend on the resolution of your maps. For the microscopy software, if the difference between two sizes is such that zero lies within the 95% error range on those measurements, that still might not be sufficient to describe them as being the same size.
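To make that concrete, here is a minimal sketch (the error bars and tolerances are made-up illustrations) of deriving the test from the accuracy you can justify: an absolute tolerance via `math.isclose` for the mapping case, and a 95% error-range check for the microscopy case:

```python
import math

# Mapping: suppose distances are only meaningful to ~0.01 miles,
# so use that as the absolute tolerance when comparing with zero.
journey = 1e-7
print(math.isclose(journey, 0.0, abs_tol=0.01))  # True: treat as zero

# Microscopy: two sizes with (assumed) standard errors; their difference
# counts as "zero" only if zero lies within the combined 95% range --
# and even then, as noted above, that may not justify calling them equal.
size_a, err_a = 2.30e-7, 0.05e-7
size_b, err_b = 2.36e-7, 0.05e-7
diff = size_a - size_b
combined_err = math.hypot(err_a, err_b)   # errors added in quadrature
print(abs(diff) <= 1.96 * combined_err)   # consistent with zero at ~95%?
```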