
I have data that I am loading into my Postgres warehouse with df.to_sql(). I used the following answer to handle a NUL-byte error, but now all nulls are inserted as the string "nan" in my tables. Below is how I implemented the answer, plus some things I tried that did not work.

Looking for any suggestions here; I'm running into this with text columns.

df = df.apply(
    lambda col: col.astype(str).str.replace("\x00", "")
    if col.dtype == "object"
    else col,
    axis=0,
)

# added these before .to_sql()
# .replace({"": None})
# .fillna(np.nan)
# .replace({np.nan: None})

df.to_sql(name, con, schema, if_exists="append")
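A minimal sketch of why the nulls come out as "nan": `astype(str)` stringifies missing values along with everything else, so NaN becomes the literal string "nan" before the NUL bytes are even stripped. One way to avoid that (an alternative to the apply above, not from the question) is to clean only the cells that are actually strings:

```python
import numpy as np
import pandas as pd

# A text column containing a NUL byte and a missing value.
col = pd.Series(["abc\x00def", np.nan])

# astype(str) stringifies the missing value too: NaN becomes "nan",
# which is what then lands in the table as text instead of NULL.
stringified = col.astype(str).str.replace("\x00", "")

# Cleaning only real strings leaves NaN untouched.
cleaned = col.map(lambda v: v.replace("\x00", "") if isinstance(v, str) else v)
```

Here `stringified` ends up as `["abc\x00def".replace NUL, "nan"]` while `cleaned` keeps the second cell as a true NaN, which pandas writes to Postgres as NULL.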
Ethan
  • Add some dummy sample data and desired result, also what's the structure of target table "name" in PostgreSQL? – Pepe N O Oct 14 '22 at 18:56
  • @PepeNO not sure how this worked but I slapped a `.replace('nan', np.nan)` before the to_sql and it worked – Ethan Oct 18 '22 at 01:37
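The workaround from the comment can be sketched like this (the column name and data are placeholders): after the NUL-byte cleanup has already turned NaN into the string "nan", map those strings back to real missing values before calling to_sql. Note this also clobbers any legitimate cell whose entire value is the text "nan".

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"text_col": ["keep\x00me", np.nan]})

# NUL-byte cleanup from the question; astype(str) turns NaN into "nan".
df = df.apply(
    lambda col: col.astype(str).str.replace("\x00", "")
    if col.dtype == "object"
    else col,
    axis=0,
)

# Map the "nan" strings back to real missing values before to_sql,
# so they are written to Postgres as NULL rather than as text.
df = df.replace("nan", np.nan)
```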

0 Answers