What is the largest precision error for a positive integer greater than 2^53 stored in a double? In other words, over all positive integers from 2^53+1 up to the maximum value of a double, what is the largest difference between the actual integer and the value the double would hold?
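To make the question concrete, here's a small Python sketch of what I mean (assuming Python 3.9+ for `math.ulp`): 2^53+1 is the first positive integer a double can't represent exactly, and the gap between adjacent doubles keeps doubling as the exponent grows.

```python
import math

# 2**53 + 1 is the smallest positive integer a double cannot hold exactly:
n = 2**53 + 1
print(float(n) == float(2**53))   # True -- it rounds down, an error of 1

# math.ulp(x) is the gap from x to the next larger representable double,
# so a nearby integer can be off by up to roughly half this gap:
for e in (53, 63, 100, 200, 1023):
    print(f"spacing near 2**{e}: {math.ulp(float(2**e)):.3g}")
```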
A little background on why I'm asking: I'm reading SNMP counters from Pub/Sub and writing them to BigQuery. The counters are UINT64, but BigQuery's integer type is INT64, so I'm currently using FLOAT in my BQ schema. It wouldn't be a problem for my use case if the counters were off by a few hundred. That is, unless there's another alternative on the BQ side (that doesn't involve using strings!).
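For what it's worth, here's the quick sanity check I can run for the UINT64 range specifically (again assuming Python 3.9+ for `math.nextafter`; this isn't my pipeline code, just an illustration of the round-trip error):

```python
import math

UINT64_MAX = 2**64 - 1

# Adjacent doubles in the top half of the UINT64 range (2**63 .. 2**64)
# are 2048 apart, so round-to-nearest can be off by up to half of that.
largest_below = math.nextafter(float(2**64), 0.0)
print(2**64 - int(largest_below))        # 2048

# Example: a counter near the top of the range, stored as a double
# (which is what a BigQuery FLOAT column holds).
counter = UINT64_MAX - 999               # hypothetical counter value
stored = float(counter)
print(int(stored) - counter)             # 1000 -- the round-trip error here
```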