
I would like to run integration and end-to-end tests against a database in a known state for each run, so that the tests are independent and repeatable. An easy way of doing this is to use docker-compose to create a database container which loads the schema and data from a dump file each time. However, restoring the database from a dump for every test is far too slow.
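
For concreteness, what I have now looks roughly like the sketch below (credentials and file names are placeholders); the official mysql image runs any `.sql` files placed in `/docker-entrypoint-initdb.d/` when the data directory is first initialized:

```yaml
version: "3"
services:
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      # Runs once, when the (empty) data directory is first initialized.
      - ./dump.sql:/docker-entrypoint-initdb.d/dump.sql
```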

A better way seems to be to restore the database once into a Docker container or volume, and then copy (mount?) that database directory into the container the test will use, re-copying/re-mounting it for each test so that the data starts fresh.

However, I am not sure what the best way to do this with docker-compose is. Could anyone provide a minimal example or explanation as to how to do this?

Birdie

1 Answer


You can start the database using a host directory for its underlying data store. If you do this, then you can create a tar file of the directory, and untar it anew for each test run.

```sh
# One-time setup: start MySQL on a bind-mounted host directory and load the dump.
mkdir mysql
docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret \
  -v "$PWD/mysql:/var/lib/mysql" --name mysql mysql
# Wait until the server finishes initializing before loading the dump.
until mysql -h 127.0.0.1 -uroot -psecret -e 'SELECT 1' >/dev/null 2>&1; do sleep 1; done
mysql -h 127.0.0.1 -uroot -psecret < dump.sql
docker stop mysql
docker rm mysql
# Snapshot the initialized data directory; this is the reusable artifact.
tar czf mysql.tar.gz mysql

# Per test run: restore the snapshot and start a fresh container on top of it.
rm -rf mysql
tar xzf mysql.tar.gz
docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret \
  -v "$PWD/mysql:/var/lib/mysql" --name mysql mysql
MYSQL_HOST=127.0.0.1 ./integration_test
docker stop mysql
docker rm mysql
```

You'd have to distribute the snapshot tarball separately (if you otherwise use AWS, an S3 bucket is a good place for it). But since it's "just" test data that you can always recreate from a database dump, it's not especially precious: you don't need to track its version history or attempt to keep it in source control.

David Maze
  • Is there any advantage to using tar rather than just copying the contents of ./mysql to /var/lib/mysql on the test container? – Birdie Sep 28 '18 at 01:26
  • Using `docker cp` in this situation is a little clunky (you'd need to split the `docker run` into a `docker create` and a `docker start`, with a `docker cp` between them; see the sketch after these comments). Beyond that, it'd work fine. – David Maze Sep 28 '18 at 10:39
  • Thanks for the help. How do I translate these commands into docker-compose (or can this not be emulated in docker-compose, so that I need a Dockerfile instead)? I can get a container that uses the mysql image and docker-entrypoint-initdb.d to load the database dump, but with docker-compose v3 and volumes the second container can't access the volume: two mysqld processes end up trying to use the same data and log files. – Birdie Sep 30 '18 at 22:02
  • The second `docker run` command probably translates into something very similar to the `docker-compose.yml` you already have; you're just unpacking the saved tar file into the host directory named in its `volumes:` (see the compose sketch below). – David Maze Sep 30 '18 at 23:07
  • If [your other question](https://stackoverflow.com/questions/52582639/how-can-i-stop-a-mysql-docker-container-which-populates-a-volume-after-database) is related to this, it's also worth noting that I wouldn't put both halves of this answer in the same `docker-compose.yml` file. Make the database dump once, and then reference it in the `docker-compose.yml` file that runs the test. – David Maze Sep 30 '18 at 23:56
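
For illustration, here is a minimal sketch of the `docker cp` variant mentioned in the comments, assuming the same snapshot directory and the hypothetical `secret` root password used above:

```sh
# Create (but don't start) the container, copy the snapshot contents into it,
# then start it; `docker cp` works on created-but-not-yet-started containers.
docker create -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret --name mysql mysql
docker cp ./mysql/. mysql:/var/lib/mysql
docker start mysql
```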
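And a minimal sketch of how the second `docker run` might translate into a compose file; the service name, port mapping, and `secret` password are assumptions carried over from the commands above:

```yaml
version: "3"
services:
  mysql:
    image: mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - ./mysql:/var/lib/mysql   # host directory restored from mysql.tar.gz
```

Each test run would then be `rm -rf mysql && tar xzf mysql.tar.gz && docker-compose up -d`, followed by the tests and a `docker-compose down`. Unlike `docker run -v`, compose resolves the relative `./mysql` path against the directory containing the compose file.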