
You need to trust me here: I spent 8 hours reading every result for "pandas data frame to mysql" on Google and Stack Overflow.

I have the following table in MySQL:

mysql> select * from mytable; 
+------+---+---+---+------------+----------+----------+
| j    | x | y | z | t          | ID       | value    |
+------+---+---+---+------------+----------+----------+
| H    | 0 | 0 | 0 |          1 | 445432   |        0 |
+------+---+---+---+------------+----------+----------+
1 row in set (0.00 sec)

The types are [CHAR, FLOAT, FLOAT, FLOAT, INT, VARCHAR, INT].
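
For reference, a table with this layout can be created along the following lines (a sketch only; the CHAR(1) and VARCHAR(20) lengths are guesses, since only the column types are stated above):

from sqlalchemy import create_engine, text

cnx = create_engine('mysql://me:pswd@host/db', echo=False)

# Guessed DDL for mytable; the CHAR/VARCHAR lengths are assumptions.
ddl = text("""
    create table mytable (
        j     CHAR(1),
        x     FLOAT,
        y     FLOAT,
        z     FLOAT,
        t     INT,
        ID    VARCHAR(20),
        value INT
    )
""")
cnx.execute(ddl)  # engine.execute, SQLAlchemy 1.x style, as in the snippets below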

I am trying to write a pandas data frame to the table with `to_sql` and SQLAlchemy.

I start like this:

import pymysql
from sqlalchemy import create_engine
import sqlalchemy

Writing and reading with plain SQL statements both work properly:

cnx = create_engine('mysql://me:pswd@host/db', echo=False)
connection = cnx.raw_connection()

And then:

from sqlalchemy import text
sqlq = text('select * from mytable')
result = cnx.execute(sqlq)
for row in result:
    print(row)

Correctly returns the contents of the table. Also,

sqli = text("insert into `mytable` value ('A', '0.',0.,0.,1,934034,0)")
insertres = cnx.execute(sqli)

Works properly.
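
As an aside, the same insert can also be written with bound parameters rather than a literal value list (a sketch in the same SQLAlchemy 1.x `engine.execute` style used above):

from sqlalchemy import create_engine, text

cnx = create_engine('mysql://me:pswd@host/db', echo=False)

# Named bind parameters; SQLAlchemy and the MySQL driver handle quoting and typing.
sqli = text(
    "insert into `mytable` (j, x, y, z, t, ID, value) "
    "values (:j, :x, :y, :z, :t, :ID, :value)"
)
cnx.execute(sqli, j='A', x=0., y=0., z=0., t=1, ID='934034', value=0)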

If I try to insert a data frame through `DataFrame.to_sql`, however, it all goes to madness.

I tried this:

test = pd.DataFrame(data = [('3', 0., 0., 0., 1, '534523', 0)])
test.columns = ['j', 'x', 'y', 'z', 't', 'ID', 'value']

And the following:

cnx = create_engine('mysql://me:pswd@host/db', echo=False)
connection = cnx.raw_connection()
test.to_sql('mytable', connection)

which gives the error:

DatabaseError: Execution failed on sql 'SELECT name FROM sqlite_master WHERE type='table' AND name=?;': not all arguments converted during string formatting

and

cnx = create_engine('mysql://me:pswd@host/db', echo=False)
test.to_sql('mytable', connection, dtype = {'j': sqlalchemy.sql.sqltypes.CHAR(),
                                                  'x': sqlalchemy.sql.sqltypes.FLOAT(),
                                                  'y': sqlalchemy.sql.sqltypes.FLOAT(),
                                                  'z': sqlalchemy.sql.sqltypes.FLOAT(),
                                                  't': sqlalchemy.sql.sqltypes.INT(),
                                                  'ID': sqlalchemy.sql.sqltypes.VARCHAR(),
                                                  'value': sqlalchemy.sql.sqltypes.INT()})

which gives the error:

ValueError: j (CHAR) not a string (and similar if I add/remove fields to the dict)

I have no idea how to solve this. I tried the raw connection (no cursor), the engine, converting the columns to every possible type, everything.

  • [this](https://stackoverflow.com/questions/38145382/pandas-dataframe-to-mariadb-database-with-to-sql-and-sqlalchemy?rq=1) might be helpful. I've used Pandas read_sql but not yet tested to_sql – cardamom Jun 12 '17 at 22:46
  • I am using the latest version of pandas (0.20.2). However, I tried to use the raw connection and the "flavor" field, as per the suggested link. No luck. – mic Jun 12 '17 at 23:12
  • Aaaand you were right. I was using the raw connection. Switched to the engine and it worked (the `pymysql` version). I could swear that I tried it 100 times before posting, but it would be useless. Thanks cardamom. – mic Jun 12 '17 at 23:23
  • Good to hear. I was also using pymysql, have not used any other sql connector. – cardamom Jun 12 '17 at 23:25
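
For the record, here is a minimal sketch of the fix the comments arrive at: pass the engine itself to `to_sql`, not `raw_connection()`. The `mysql+pymysql://` URL follows the `pymysql` remark above, while `if_exists='append'` and `index=False` are assumptions on my part:

import pandas as pd
from sqlalchemy import create_engine

# Pass the engine, not a DBAPI connection: with an SQLAlchemy engine pandas
# uses its SQLAlchemy backend; with a raw connection it falls back to the
# legacy SQLite path, which is where the sqlite_master query comes from.
cnx = create_engine('mysql+pymysql://me:pswd@host/db', echo=False)

test = pd.DataFrame(
    [('3', 0., 0., 0., 1, '534523', 0)],
    columns=['j', 'x', 'y', 'z', 't', 'ID', 'value'],
)

# if_exists='append' keeps the existing table and rows; index=False skips
# the DataFrame index, which mytable has no column for (both assumptions).
test.to_sql('mytable', cnx, if_exists='append', index=False)

As far as I can tell, the SQLite fallback also explains the `dtype` error: that code path expects type names as plain strings, whereas the SQLAlchemy path accepts type objects such as `sqlalchemy.sql.sqltypes.CHAR()`.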
