
I want to create containers with a MySQL database and a dump loaded, for integration tests. Each test should connect to a fresh container with the DB in the same state. Tests should be able to read and write, but all changes should be lost when the test ends and the container is destroyed. I'm using the "mysql" image from the official Docker repo.

1) The image's docs suggest taking advantage of the "entrypoint" script, which imports any .sql files you place in a specific folder. As I understand it, this will re-import the dump every time a new container is created, so it's not a good option. Is that correct?

2) This SO answer suggests extending that image with a RUN statement that starts the mysql service and imports all dumps. This seems to be the way to go, but I keep getting

mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

followed by

ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

when I run the build, even though I can connect to MySQL fine in containers of the original image. I tried a "sleep 5" to wait for the mysqld service to start up, and adding -h with 'localhost' or the docker-machine IP.
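For reference, the RUN-based build described in 2) generally looks like the following sketch (the dump filename and the readiness loop are assumptions, not the asker's actual files). Polling mysqladmin until the server answers avoids racing against startup the way a fixed sleep does:

```dockerfile
# Sketch of approach 2), assuming a dump.sql in the build context.
FROM mysql:5.7
COPY dump.sql /tmp/dump.sql
# Start mysqld in the background, poll until it accepts connections,
# import the dump, then shut down cleanly before the layer is committed.
RUN mysqld_safe & \
    until mysqladmin ping --silent; do sleep 1; done; \
    mysql < /tmp/dump.sql && \
    mysqladmin shutdown
```

Note one caveat with this route: the official mysql image declares VOLUME /var/lib/mysql, so data written there at build time is not persisted into the image layers, which is one reason some answers avoid the official image for build-time seeding.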

How can I fix "2)"? Or, is there a better approach?

Victor Basso

4 Answers


If re-seeding the data is an expensive operation, another option would be starting / stopping a Docker container (previously built with the DB and seed data). I blogged about this a few months ago in "Integration Testing using Spring Boot, Postgres and Docker", and although the blog focuses on Postgres, the idea is the same and could be translated to MySQL.
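The start/stop cycle per test can be sketched as follows (the image and container names are assumptions; the seeded image is built once up front):

```shell
# Build the seeded image once (its Dockerfile bakes in the DB and seed data)
docker build -t mysql-seeded:it .

# Per test: start a throwaway container, run the test, destroy it
docker run -d --rm --name it-db -p 3306:3306 mysql-seeded:it
# ... run the integration test against localhost:3306 ...
docker stop it-db   # --rm removes the container, discarding all changes
```

Because every test starts from the same image, each container begins in an identical state and all writes are thrown away with the container.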

ootero

The standard MySQL image is pretty slow to start up, so it might be useful to use something that has been prepared more for this situation, like this:

https://github.com/awin/docker-mysql

You can include data, or use it with Flyway as well; either way it should speed things up a bit.

chrismacp
  • This worked for me and is the simplest solution. Just note it is not based on the official MySQL docker image, but for test situations that is probably OK. – Martin Charlesworth May 29 '19 at 22:27

I've solved this before by using a database migration tool, specifically Flyway: http://flywaydb.org/documentation/database/mysql.html

Flyway is more for migrating the database schema as opposed to putting data into it, but you can use it for either. Whenever you start your container, just run the migrations against it and your database will be set up however you want. It's easy to use, and you can also use the default MySQL docker container without messing around with any settings. Flyway is also nice for many other reasons, like providing version control for a database schema, and the ability to perform migrations on production databases easily.
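As a sketch, running the migrations against a freshly started container could look like this (the database name, credentials, and migration folder are assumptions):

```shell
# Start a disposable MySQL container from the stock image
docker run -d --rm --name it-db -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=testdb -p 3306:3306 mysql

# Apply the versioned migrations in ./sql to bring the schema up to date
flyway -url=jdbc:mysql://localhost:3306/testdb \
  -user=root -password=secret -locations=filesystem:sql migrate
```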

To run integration tests with a clean DB, I would just have an initial dataset that you insert before the test, then truncate all the tables afterwards. I'm not sure how large your dataset is, but I think this is generally faster than restarting a MySQL container every time.
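The truncate step can be scripted against information_schema (the schema name and credentials here are assumptions); disabling foreign key checks lets the tables be truncated in any order:

```shell
# Generate and run a TRUNCATE statement for every table in the test schema
mysql -h 127.0.0.1 -u root -N testdb -e \
  "SELECT CONCAT('TRUNCATE TABLE \`', table_name, '\`;') \
   FROM information_schema.tables WHERE table_schema = 'testdb';" \
| mysql -h 127.0.0.1 -u root \
    --init-command='SET FOREIGN_KEY_CHECKS=0' testdb
```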

Aaron Harrington
  1. Yes, the data will be imported every time you start a container. This could take a long time.

  2. You can view an example image that I created: https://github.com/kliewkliew/mysql-adventureworks (https://hub.docker.com/r/kliew/mysql-adventureworks/). My Dockerfile builds an image by installing MySQL, importing a sample database (from a .sql file), and setting the entrypoint to auto-start the MySQL server. When you start a container from this image, the data is pre-loaded in the database.
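A minimal sketch of this kind of Dockerfile (the base image, package name, and paths are assumptions; see the linked repo for the real thing):

```dockerfile
# Install MySQL directly in the image instead of using the official
# mysql image, whose VOLUME on /var/lib/mysql discards build-time data.
FROM ubuntu:16.04
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server
COPY dump.sql /tmp/dump.sql
# Import the dump at build time so every container starts pre-seeded
RUN service mysql start && \
    mysql < /tmp/dump.sql && \
    service mysql stop
EXPOSE 3306
CMD ["mysqld_safe"]
```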

kliew