
Sorry for the long title.

I have a CSV file that I'm reading into a pandas DataFrame.

One value in the file is 567.188. Once it's in my DataFrame it's being read as 567.188e9, and when I insert it into Postgres 13 using `df.to_sql()`, in my Postgres table I now see 567.1880000001.

Why is there a 1 at the end, and why is it breaking all my joins?
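The symptom can be reproduced without Postgres at all: `read_csv` parses numeric-looking columns as `float64`, and 567.188 has no exact binary representation. A minimal sketch of both the problem and one fix (column name `amount` is hypothetical):

```python
import io
from decimal import Decimal

import pandas as pd

csv_text = "amount\n567.188\n"

# Default: read_csv parses the column as float64. 567.188 has no exact
# binary representation, so the stored double only approximates it, and
# the extra digits can surface once the value reaches a numeric column.
df_float = pd.read_csv(io.StringIO(csv_text))
print(df_float["amount"].dtype)  # float64
# Decimal(float) expands the stored double exactly - it is not 567.188:
print(Decimal(float(df_float["amount"].iloc[0])))

# Keeping the column as text and converting to Decimal preserves the
# value exactly; drivers map Python Decimal to Postgres numeric losslessly.
df_exact = pd.read_csv(io.StringIO(csv_text), dtype={"amount": str})
df_exact["amount"] = df_exact["amount"].map(Decimal)
print(df_exact["amount"].iloc[0])  # 567.188
```

Reading the column with `dtype=str` (or a `converters={"amount": Decimal}` argument) is the key step: the value never passes through a binary float on its way to the database.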

Nash
  • Excuse my rushed question. I'll clean it up in a bit. But the main idea is there. – Nash Jul 21 '22 at 07:12
  • 1
    What data type is that column? If it's `float` or `double precision` this [is expected](https://floating-point-gui.de/) –  Jul 21 '22 at 07:25
  • See also [Is floating point math broken?](https://stackoverflow.com/q/588004/5320906). – snakecharmerb Jul 21 '22 at 08:01
  • @a_horse_with_no_name It's a `numeric` on the DB, and the CSV is defining the col_type as decimal. – Nash Jul 21 '22 at 08:36
  • @snakecharmerb I'm not done reading, but I see where the issue could be, thanks for this. – Nash Jul 21 '22 at 08:38
  • 1) CSV files are text, so all values are strings. 2) Postgres is probably not the culprit: `select '567.188'::numeric;` and `select '567.188'::float;` both yield `567.188` with `show extra_float_digits;` set to the default of 1. 3) Why use a dataframe? Instead use the `psycopg2` [Copy](https://www.psycopg.org/docs/usage.html#using-copy-to-and-copy-from) methods directly. – Adrian Klaver Jul 21 '22 at 15:20
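Tying the comments back to the join failure: once one side of the key has passed through `float64`, exact equality with the other side no longer holds. A minimal pandas sketch (column names and values are illustrative):

```python
import pandas as pd

# One side kept the key as CSV text, the other went through float64
# and picked up trailing digits (as in the question).
left = pd.DataFrame({"key": ["567.188"], "side": ["csv"]})
right = pd.DataFrame({"key": [567.1880000001], "side": ["db"]})

# Comparing the text key against the float's string form finds no match:
broken = left.merge(right.assign(key=right["key"].astype(str)), on="key")
print(len(broken))  # 0 rows joined

# Rounding both sides to the known scale (3 decimal places here)
# restores the match:
left_fixed = left.assign(key=left["key"].astype(float).round(3))
right_fixed = right.assign(key=right["key"].round(3))
fixed = left_fixed.merge(right_fixed, on="key")
print(len(fixed))  # 1 row joined
```

Rounding is a workaround; the cleaner fix is to keep the key out of floats entirely (load it as text or `Decimal`, as the comments suggest).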

0 Answers