How does one calculate how small and how large a physical measurement can be so that the application will not incur overflow or underflow when representing the measurement as a double?
For example: I take a few measurements of distance to a flat surface and I want to fit a plane to the data set. I want to figure out how close to and how far from the surface I can be when taking those measurements so that the results of the application remain correct.
In my program, I'm reading the measurements into 3-tuples of double to represent points in R3. The desired precision is 2 or 3 decimal places.
Not sure where to start...
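The only concrete thing I've put together so far is a rough probe along these lines (C++; the magnitudes and the 0.001 threshold are just placeholders for whatever resolution I actually need), which checks how coarse the spacing between adjacent doubles gets at a given measurement magnitude:

```cpp
#include <cfloat>
#include <cmath>
#include <cstdio>
#include <initializer_list>

int main() {
    // The resolution I think I need: 3 decimal places.
    const double required_resolution = 0.001;

    // At each magnitude, the gap to the next representable double (one ULP)
    // tells me whether 0.001 can still be resolved at that distance.
    for (double magnitude : {1.0, 1e3, 1e6, 1e9, 1e12, 1e15}) {
        double ulp = std::nextafter(magnitude, DBL_MAX) - magnitude;
        std::printf("magnitude %.0e: ulp = %.3e (%s)\n",
                    magnitude, ulp,
                    ulp <= required_resolution ? "resolves 0.001" : "too coarse");
    }

    // Hard range limits of double, for reference.
    std::printf("DBL_MIN = %e, DBL_MAX = %e, DBL_EPSILON = %e\n",
                DBL_MIN, DBL_MAX, DBL_EPSILON);
    return 0;
}
```

But I don't know how to turn that kind of probe into a proper analysis of the measurement range my application can tolerate.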
EDIT: I'm not trying to catch overflow at runtime; I'm trying to analyze the limits of the application up front.