I have data that I am loading into my Postgres warehouse with df.to_sql(). I used the following answer to handle the same error, but now every null ends up in my tables as the literal string "nan". Below is how I implemented the answer, along with some things I have tried that do not work. I am running into this with text columns and would appreciate any suggestions.
# strip NUL (\x00) bytes from object columns so Postgres accepts the insert
df = df.apply(
    lambda col: col.astype(str).str.replace("\x00", "")
    if col.dtype == "object"
    else col,
    axis=0,
)
# things I have added before .to_sql(), none of which fix it:
# .replace({"": None})
# .fillna(np.nan)
# .replace({np.nan: None})
df.to_sql(name, con, schema=schema, if_exists="append")
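In case it helps frame suggestions, here is a minimal sketch of the direction I am considering (assuming the same df, con, name, and schema as above, and that the object columns only contain strings and missing values). Series.str.replace propagates NaN instead of stringifying it, so it might avoid the astype(str) step that seems to be turning nulls into the literal "nan":

import pandas as pd

# sketch: .str.replace skips NaN/None rather than converting them to "nan",
# so nulls should survive the NUL-stripping step and reach the database as NULL
# (assumes the object columns hold only strings and missing values)
def strip_nul(col: pd.Series) -> pd.Series:
    if col.dtype == "object":
        return col.str.replace("\x00", "", regex=False)
    return col

df = df.apply(strip_nul, axis=0)
df.to_sql(name, con, schema=schema, if_exists="append")

My worry is that if any non-string objects remain in those columns, .str.replace would turn them into NaN as well, so I am not sure whether this is safe for my data.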