I have a dataframe and I want to convert it to a list, but the data changes and I don't know why.
-
Welcome to StackOverflow @wubocheng. Please edit your question to include reproducible code (that we can copy/paste in a Python file to be able to test ourselves); an image of code is not handy. Please also include what you have tried so far, and what worked/what did not work. – Basj Dec 09 '20 at 09:40
-
[Is floating point math broken?](https://stackoverflow.com/questions/588004/is-floating-point-math-broken) – Guy Incognito Dec 09 '20 at 09:43
-
Post the code and results as text, not images of the text. I'd guess what you see is only a formatting difference and the *actual* value really is -0.99998999999999 which, if displayed using only 6 fractional digits, would be `-0.999990` – Panagiotis Kanavos Dec 09 '20 at 09:43
-
What is the *original* value? If this data comes from a CSV, what does the CSV look like? It's quite possible that the original value can't be represented exactly, resulting in floating point value close but not exactly the same – Panagiotis Kanavos Dec 09 '20 at 10:55
1 Answer
This looks to be only a formatting difference: -0.99998999999999 displayed using only 6 fractional digits appears as -0.999990. The underlying number doesn't change; the dataframe pretty-printer is simply using only 6 digits.
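A quick way to see this, using the value above as a stand-in for the original data:

```python
import pandas as pd

df = pd.DataFrame({'x': [-0.99998999999999]})
print(df)                # pretty-printer rounds the display to 6 digits
print(df['x'].iloc[0])   # the stored value is unchanged
print(df['x'].tolist())  # converting to a list shows the full precision
```

The `tolist()` output looks "different" only because it bypasses the dataframe's display formatting.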
Floating point numbers can't represent every non-integer number exactly. On top of that, math operations can introduce rounding and scaling errors that affect e.g. the 10th or 15th digit. That's why comparing two floating point numbers for equality can fail. Instead, one should check whether their absolute difference is below a small threshold.
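A minimal sketch of a tolerance-based comparison, using only the standard library:

```python
import math

a = 0.1 + 0.2  # accumulates a tiny rounding error
b = 0.3

print(a == b)                             # exact comparison fails
print(math.isclose(a, b, abs_tol=1e-9))   # tolerance-based check succeeds
```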
Update
The answer to this possibly duplicate question shows how to increase the dataframe's display precision:
```python
# Temporarily set the display precision
with pd.option_context('display.precision', 10):
    print(df)
```
It also appears that Pandas may sacrifice precision for performance when parsing CSVs. Passing `float_precision='round_trip'` to `read_csv` fixes this.
From the `read_csv` docs:

> `float_precision` : string, default None
>
> Specifies which converter the C engine should use for floating-point values. The options are `None` for the ordinary converter, `high` for the high-precision converter, and `round_trip` for the round-trip converter.
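As a sketch of how this is used (the CSV content here is made up for illustration; the original CSV wasn't posted):

```python
import io
import pandas as pd

# Hypothetical CSV content standing in for the original file
csv_data = "value\n-0.999990\n"

# The round-trip converter guarantees that parsing a float and printing it
# back reproduces the original text exactly
df = pd.read_csv(io.StringIO(csv_data), float_precision='round_trip')
print(df['value'].iloc[0])
```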

-
Thanks for your answer. I found that this problem occurs when I use read_csv; setting the column type to object and then using astype to convert to float avoids it. – wubocheng Dec 09 '20 at 10:18
-
This looks like a difference in `__str__` (`__repr__`?) for 2 different objects – Sergey Bushmanov Dec 09 '20 at 10:20
-
@wubocheng it's not `read_csv` that has any problem. `read_csv` produces a dataframe. It's how the *dataframe* is displayed in the console. A different renderer displays the *same* value differently. If you multiplied that value by 1M you'd see it contains extra fractional digits. And once again, floating point numbers *aren't* precise. It's quite possible that whatever value is stored in the CSV (which wasn't posted) can't be represented exactly – Panagiotis Kanavos Dec 09 '20 at 10:54