
I'm trying to read double values from a file into a std::vector. However, I've noticed that the values do not match what is in the file, but show a slight yet consistent shift of about 0.000000007. Is this normal?

#include <iostream>
#include <fstream>
#include <iomanip>
#include <string>
#include <vector>


std::vector< double > readFromFile(const std::string& file_name) {
    std::vector< double > values;
    std::ifstream file(file_name.data(), std::ios::in);
    file.precision(9);   // makes no difference
    double value = 0.0;
    if (file.is_open()){
        while (file >> value) {
            values.push_back(value);
        }    
        file.close();
    }
    return values;
}

int main(int argc, char const *argv[]) {

    auto vals = readFromFile("src/samples.txt");

    std::cout.precision(9);
    for (auto& val : vals) {
        std::cout <<  std::fixed << val << std::endl;
    }

    return 0;
}

My samples.txt:

1595519203.966806166
1595519204.000087015
1595519204.033377640
1595519204.066651098

And the code output:

1595519203.966806173
1595519204.000087023 
1595519204.033377647
1595519204.066651106

This is something I have never dealt with before, and I am left wondering whether the issue is in how the file is read or in how I am printing it.

joaocandre
  • You could always view the data in memory with a debugger. Hint: the numbers you have printed are within the tolerance of `double`, which guarantees roughly 15 significant decimal digits of precision (per IEEE 754). Try using `long double` if `double` is insufficient? – Mansoor Sep 17 '20 at 23:37

1 Answer


At magnitudes around 1.6×10^9 (the size of your values), a 64-bit IEEE 754 double has a spacing between adjacent representable values of roughly 2.4×10^-7, which corresponds to about 16 significant decimal digits, while your samples are written with 19. So, yes, this discrepancy is normal and cannot be avoided with double.
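As a minimal illustration, the sketch below stores one of your sample values as a double literal and prints it back with 9 fractional digits; on a typical implementation the output is 1595519203.966806173, matching what your program prints, because that is the nearest representable double.

#include <cstdio>

int main() {
    // This decimal literal cannot be represented exactly as a double;
    // the compiler stores the nearest representable value instead.
    double v = 1595519203.966806166;
    std::printf("%.9f\n", v);   // typically prints 1595519203.966806173
    return 0;
}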

If you want higher precision, you can try long double. Some platforms provide hardware or software support for 80-bit extended or 128-bit floating-point numbers. However, on some platforms (such as the Microsoft C++ compiler), long double is identical to the 64-bit double and gains you nothing.
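A rough sketch of the question's reader switched to long double (assuming the same src/samples.txt path); whether it actually helps depends entirely on how wide long double is on your platform:

#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Same reading loop as in the question, but using long double.
// On x86 GCC/Clang this is typically 80-bit extended precision
// (~18-19 significant decimal digits); on MSVC it is identical to double.
std::vector<long double> readFromFile(const std::string& file_name) {
    std::vector<long double> values;
    std::ifstream file(file_name);
    long double value = 0.0L;
    while (file >> value) {
        values.push_back(value);
    }
    return values;
}

int main() {
    auto vals = readFromFile("src/samples.txt");
    std::cout.precision(9);
    for (const auto& val : vals) {
        std::cout << std::fixed << val << '\n';
    }
    return 0;
}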

If long double is also insufficient, you will have to resort to a software bignum (arbitrary-precision) library. This will probably be slower, but it allows arbitrary precision.
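One option, assuming Boost.Multiprecision is available, is a decimal type such as cpp_dec_float_50, which carries 50 significant decimal digits and can be read from a stream much like a double. A sketch, not a drop-in replacement for your exact code:

#include <boost/multiprecision/cpp_dec_float.hpp>
#include <fstream>
#include <iomanip>
#include <iostream>
#include <vector>

// cpp_dec_float_50 stores 50 significant decimal digits, far more
// than the 19 digits written in the sample file.
using big_float = boost::multiprecision::cpp_dec_float_50;

int main() {
    std::ifstream file("src/samples.txt");
    std::vector<big_float> values;
    big_float value;
    while (file >> value) {
        values.push_back(value);
    }
    std::cout << std::fixed << std::setprecision(9);
    for (const auto& v : values) {
        std::cout << v << '\n';
    }
    return 0;
}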

Andreas Wenzel