6

I am using pandas (0.20.3) and Python 3.5.3.

I get an error like this:

mysql.connector.errors.ProgrammingError: 1054 (42S22): Unknown column 'nan' in 'field list'

I thought it was because MySQL doesn't understand 'nan' as null.

According to this article:

The problem was fixed in pandas 0.15.0

However, I still get this error. Is there something wrong with my code?

Or what should I fix?

stocksDf = pd.read_csv('companys.csv', names=['name','place'])

for i,row in stocksDf.iterrows():
    sql = "insert into CompanyUs(name,place) VALUES(%s,%s)" 
    data = (row['name'],row['place'])
    cur.execute(sql,data)
    pprint("Company Write : %s" % row['name'])
    conn.commit()
whitebear
  • 11,200
  • 24
  • 114
  • 237
  • You are not using `to_sql` from `pandas`, but `execute` from a different MySQL API, so it is not the same situation as in the referenced article and is not related to the version of `pandas`. `stocksDf = stocksDf.astype(object).where(pd.notnull(stocksDf), None)` may help. – Huang Jul 30 '17 at 03:09
  • @Huang Good answer! – Dondon Jie Mar 27 '19 at 09:15
  • @Huang Thank you very much! It works for me! – GeekHades Feb 26 '20 at 09:21

4 Answers

4

The article linked in the question refers to DataFrame.to_sql(), which you are not using in your code. If you want to keep this way of writing to the database, you need to change the NaNs in your DataFrame:

As explained in this question, the solution is to change all NaN values to None:

stocksDf = stocksDf.where((pd.notnull(stocksDf)), None)

A further important note from the original answer:

This changes the dtype of all columns to object.
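
For completeness, here is a minimal sketch of how that fix could slot into the code from the question (cur and conn are assumed to be the mysql.connector cursor and connection from the original script; the astype(object) step follows the suggestion in the comments so None is not coerced back to NaN):

import pandas as pd

stocksDf = pd.read_csv('companys.csv', names=['name','place'])

# Cast to object, then replace every NaN with None
stocksDf = stocksDf.astype(object).where(pd.notnull(stocksDf), None)

for i, row in stocksDf.iterrows():
    sql = "insert into CompanyUs(name,place) VALUES(%s,%s)"
    cur.execute(sql, (row['name'], row['place']))  # None is sent to MySQL as NULL
conn.commit()

Committing once after the loop keeps the original behaviour while avoiding a round trip per row.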

J_Scholz
  • 476
  • 3
  • 12
  • Thanks, I've been struggling with this same error all afternoon. Your code fixed it. – Peter Dec 20 '19 at 00:09
1

Pandas' read_csv function can put (at least) two types of nulls into your data, numpy.float64('nan') and numpy.nan. There are cases where one is acceptable and the other isn't, and this sounds like one of those cases. Use the debugger to determine what the type() of the data is. You can then 'fix' the data with [DataFrame.replace](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html#pandas-dataframe-replace).
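
A rough sketch of that approach, assuming the file and column names from the question and that numpy is available:

import numpy as np
import pandas as pd

stocksDf = pd.read_csv('companys.csv', names=['name','place'])

# Inspect what kind of null the parser produced for a suspect cell
print(type(stocksDf.loc[0, 'place']))

# Map every NaN to None so the MySQL driver can send NULL instead
stocksDf = stocksDf.replace({np.nan: None})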

verisimilidude
  • 475
  • 6
  • 9
1

Your data might contain newline characters or escape sequences somewhere. MySQL treats these as 'nan'. Either fix such entries manually, or use pandas to strip the escaped characters.
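
A rough sketch of cleaning those characters with pandas (assuming the name and place columns from the question are the ones affected):

import pandas as pd

stocksDf = pd.read_csv('companys.csv', names=['name','place'])

# Collapse newlines and other escaped whitespace into single spaces, then trim the ends
for col in ['name', 'place']:
    stocksDf[col] = stocksDf[col].str.replace(r'\s+', ' ', regex=True).str.strip()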

Trinity
  • 13
  • 3
-1

I have seen this error too. NaN was assigned to a double variable when it was calculated in a function that divided by zero. When attempting to store this value in the database, it returns that error. Solution:

Add an if clause to guard against dividing by zero, for example (in Python):

if value2 == 0:
    calc_value = 0            # avoid NaN from a division by zero
else:
    calc_value = 20 / value2
Roel
  • 1