
I'm trying to write a dataframe to a MySQL table but am getting a (111 Connection refused) error.

I followed the accepted answer here: Writing to MySQL database with pandas using SQLAlchemy, to_sql

Answer's code:

import pandas as pd
import mysql.connector
from sqlalchemy import create_engine

engine = create_engine('mysql+mysqlconnector://[user]:[pass]@[host]:[port]/[schema]', echo=False)
data.to_sql(name='sample_table2', con=engine, if_exists = 'append', index=False)

...and the create_engine() line worked without error, but the to_sql() line failed with this error:

(mysql.connector.errors.InterfaceError) 2003: Can't connect to MySQL server on 'localhost:3306' (111 Connection refused)

How I connect to my MySQL database/table isn't really relevant here, so completely different answers are welcome. Given the deprecation of the MySQL 'flavor' in pandas 0.20.2, what is the proper way to write a dataframe to MySQL?

  • Are you getting any other specifics about why connection refused? Could be entirely unrelated to Python, Pandas, and SQLAlchemy – openwonk Jul 05 '17 at 20:18
  • No other specifics - however I've been using MySQLdb as my connector to `read_sql()` without issue. Could it be a difference in read / write permissions? – elPastor Jul 05 '17 at 20:52
  • Permissions could be it (e.g. if you only have read permissions). Can you trying creating a simple table using another interface (e.g. MySQL Workbench or Sequel Pro)? – openwonk Jul 05 '17 at 21:21
  • Yeah, I've been creating databases, tables, writing to tables, etc, all using MySQLdb... and reading using `read_sql()`. I have root access, so I should be able to write, but it may be how I setup the call that is somehow not recognizing that access. – elPastor Jul 05 '17 at 21:40
  • @openwonk - thanks for your help, I got a tip from Andy Hayden that solved the issue. Couldn't tell you why, but it worked. Going to post as the answer. – elPastor Jul 05 '17 at 22:10
  • 1
    @openwon - and after all that, I'm realizing how slow `to_sql()` is for a data set of any relevant size and am chasing the `LOAD DATA INFILE` method... – elPastor Jul 05 '17 at 22:30
  • Yeah, Pandas isn't really meant for I/O-heavy work like that... Glad to see you got it to work! – openwonk Jul 06 '17 at 18:39
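
For reference, here is a minimal sketch of the LOAD DATA INFILE route mentioned in the comments above, assuming the MySQLdb driver and a server that permits local infile loads; the CSV path, table name, and credentials are illustrative placeholders, not values from this thread:

import MySQLdb
import pandas as pd

# Illustrative dataframe; substitute your own.
df = pd.DataFrame({'a': [1, 2], 'b': ['x', 'y']})

# LOAD DATA INFILE reads from a file, so dump the dataframe to CSV first.
df.to_csv('/tmp/sample.csv', index=False)

# local_infile=1 lets the client send a local file to the server
# (the server must also allow it via its local_infile variable).
conn = MySQLdb.connect(host='localhost', user='[user]', passwd='[pass]',
                       db='[schema]', local_infile=1)
cur = conn.cursor()
cur.execute("""
    LOAD DATA LOCAL INFILE '/tmp/sample.csv'
    INTO TABLE my_table
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\\n'
    IGNORE 1 LINES
""")
conn.commit()
conn.close()

This bypasses row-by-row INSERT statements entirely, which is why it tends to be much faster than to_sql() for large datasets.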

1 Answer


Thanks to a tip from @AndyHayden, this answer was the trick. Basically, replacing `mysqlconnector` with `mysqldb` in the connection string was the linchpin.

from sqlalchemy import create_engine

# Requires the MySQLdb driver (the mysqlclient / MySQL-python package).
engine = create_engine('mysql+mysqldb://[user]:[pass]@[host]:[port]/[schema]', echo=False)
df.to_sql(name='my_table', con=engine, if_exists='append', index=False)

Here [schema] is the database name; in my particular case, :[port] is omitted and [host] is localhost.
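
For completeness, a minimal end-to-end sketch of this setup; the credentials, database, and table names below are illustrative placeholders, and chunksize is an optional to_sql() argument that batches the inserts for larger dataframes:

import pandas as pd
from sqlalchemy import create_engine

# Illustrative dataframe and credentials; substitute your own.
df = pd.DataFrame({'id': [1, 2, 3], 'name': ['a', 'b', 'c']})
engine = create_engine('mysql+mysqldb://root:password@localhost/mydbname', echo=False)

# chunksize writes the rows in batches rather than all at once.
df.to_sql(name='my_table', con=engine, if_exists='append', index=False, chunksize=1000)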

  • Putting it here so that other people can avoid confusion: per @elPastor's answer, engine = create_engine('mysql+mysqldb://root:password@localhost:3306/mydbname', echo=False) – JD Nayak Sep 18 '20 at 13:40