I have a table that I use to record measurements, all of which are increments of 0.1. The first column is just an index for the measurement, the second column is the measurement itself, then two additional columns account for adjustments that may be needed, and another column adds or subtracts the adjustments to or from the original measurement (the header for this column is "Corrected Elevation"). From Corrected Elevation I calculate the most frequently occurring measurement, "=MODE.SNGL(tbRecordedSurvey[Corrected Elevation])", and then use that value as a datum (the cell is named "DATUM") to adjust all the other measurements, so that my most frequently occurring measurement is zero and I can see how far I deviate from the mode. The header for this last column is "Datum Adjust. Elevation" and the entire column has the formula "=[@[Corrected Elevation]]-DATUM". So if my initial measurement was 7.1 and my most frequently occurring measurement was 7.0, I would expect my Datum Adjust. Elevation to be 0.1, and this is how it appears in the actual column.
However, I need the data from the "Datum Adjust. Elevation" column to add to a database, so I'm using VBA to read it. For some reason, when reading into an array or just a range, every instance of 0.1 or -0.1 comes through as 9.99999999999996E-02 or -9.99999999999996E-02. This doesn't happen for any other number. If I go in, delete the formula from the column, and just replace it with a literal 0.1, then I have no issues, but that defeats the whole point of using a table in the first place. I've checked the calculation steps for each cell involved in the formula and none of them have this issue.
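This is roughly what my read code looks like (the sheet name and the loop are illustrative; the table and column names match the formulas above):

Sub ReadDatumAdjusted()
    Dim lo As ListObject
    Dim vals As Variant
    Dim i As Long

    Set lo = ThisWorkbook.Worksheets("Survey").ListObjects("tbRecordedSurvey")

    ' Pull the whole column into a Variant array in one read
    vals = lo.ListColumns("Datum Adjust. Elevation").DataBodyRange.Value

    For i = 1 To UBound(vals, 1)
        Debug.Print vals(i, 1)   ' prints 9.99999999999996E-02 where I expect 0.1
    Next i
End Sub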
If I create a Range in VBA and set it to one of those specific cells as a test, I get [@[Corrected Elevation]]-DATUM = 9.99999999999996E-02.
But if I create ranges to check Corrected Elevation and DATUM, they read into the range just fine, as 7.1 and 7.0 respectively. So it seems like all of the data going in is fine, but the result isn't, and only when the result is 0.1 or -0.1.
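The checks looked something like this (the sheet name and cell addresses are illustrative; DATUM is the named cell):

Dim ws As Worksheet
Set ws = ThisWorkbook.Worksheets("Survey")

Debug.Print ws.Range("E2").Value      ' Corrected Elevation: prints 7.1
Debug.Print ws.Range("DATUM").Value   ' prints 7
Debug.Print ws.Range("F2").Value      ' Datum Adjust. Elevation: prints 9.99999999999996E-02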
Any advice?