186

I am trying to make sure that my app container does not run migrations / start until the db container is started and ready to accept connections.

So I decided to use the `healthcheck` and `depends_on` options in the docker-compose file (v2.1).

In the app, I have the following

app:
    ...
    depends_on:
      db:
        condition: service_healthy

The db on the other hand has the following healthcheck

db:
  ...
  healthcheck:
    test: TEST_GOES_HERE
    timeout: 20s
    retries: 10

I have tried a couple of approaches like:

  1. Making sure the db dir is created: `test: ["CMD", "test -f var/lib/mysql/db"]`
  2. Getting the mysql version: `test: ["CMD", "echo 'SELECT version();' | mysql"]`
  3. Pinging the admin (marks the db container as healthy but does not seem to be a valid test): `test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]`

Does anyone have a solution to this?

John Kariuki
  • You created a docker for a DB ? Please tell me that your data is outside of this container for the sake of your application health – Jorge Campos Mar 02 '17 at 22:54
  • Or at least this is a test containter. – Jorge Campos Mar 02 '17 at 22:55
  • This is only for development/testing ONLY purposes actually. – John Kariuki Mar 02 '17 at 22:56
  • 5
    I think you should use a command to connect and run a query in mysql, none of the samples you provided do this: something like: `mysql -u USER -p PASSWORD -h MYSQLSERVERNAME -e 'select * from foo...' database-name` – Jorge Campos Mar 02 '17 at 23:00
  • Warning: With "version 3" of the compose file, "condition" support is no longer available. See https://docs.docker.com/compose/compose-file/#depends_on – BartoszK Oct 29 '18 at 04:47
  • @JorgeCampos could you give a more detailed explanation? I´m facing the same problem – Thadeu Melo Jan 04 '19 at 15:18
  • @JorgeCampos Why is having a database container bad? – S.. Jan 12 '20 at 09:36
  • @S.. well, back in the days, almost 3 years back, when I added that comment it use to not be a good idea. Containers were not very reliable and mostly because people would forget to leave the data out of the container... nowadays I don't really think it is valid anymore... that comment... – Jorge Campos Jan 13 '20 at 17:25
  • 2
    @JorgeCampos Okay thanks. Usually I have a db container, but map volumes for the data dir. So that if the container went down the data would persist to it's next instantiation. – S.. Jan 14 '20 at 09:20

18 Answers

182
version: "2.1"
services:
    api:
        build: .
        container_name: api
        ports:
            - "8080:8080"
        depends_on:
            db:
                condition: service_healthy
    db:
        container_name: db
        image: mysql
        ports:
            - "3306"
        environment:
            MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
            MYSQL_USER: "user"
            MYSQL_PASSWORD: "password"
            MYSQL_DATABASE: "database"
        healthcheck:
            test: ["CMD", "mysqladmin" ,"ping", "-h", "localhost"]
            timeout: 20s
            retries: 10

The api container will not start until the db container is healthy (basically until mysqladmin is up and accepting connections.)

John Kariuki
  • 35
    `mysqladmin ping` will return a false positive if the server is running but not yet accepting connections. – halfpastfour.am May 31 '17 at 17:54
  • @BobKruithof I am facing the same issue... is there any work around, something like sleep or exit status for retry – Mukesh Agarwal Jul 12 '17 at 12:13
  • @BobKruithof got the solution – Mukesh Agarwal Jul 12 '17 at 13:14
  • 1
    @dKen see my answer below https://stackoverflow.com/a/45058879/279272, I hope it will work for you also. – Mukesh Agarwal Aug 03 '17 at 14:44
  • this keeps pinging, even when everything runs properly and spams the log. pretty sad that there is no other way around it right now, it seems. not to speak of that this is 2.x feature only – phil294 Mar 23 '18 at 12:50
  • @Blauhirn did you find a better way by now? – Philipp Kyeck Apr 05 '18 at 10:17
  • @pkyeck no i didnt. still looks like the most docker-like solution to me – phil294 Apr 05 '18 at 19:47
  • Warning: With "version 3" of the compose file, "condition" support is no longer available. See https://docs.docker.com/compose/compose-file/#depends_on – BartoszK Oct 29 '18 at 04:48
  • 20
    To check this using password: `test: ["CMD", 'mysqladmin', 'ping', '-h', 'localhost', '-u', 'root', '-p$$MYSQL_ROOT_PASSWORD' ]` - if you defined `MYSQL_ROOT_PASSWORD` in `environments` section. – laimison Jun 15 '19 at 13:29
  • 2
    Notice that with the separated Compose Spec "condition" has been added to "depends_on" again: https://github.com/compose-spec/compose-spec/blob/a4a7e7c/spec.md#long-syntax-1 You'll need Compose 1.27.0 or newer for this: https://github.com/docker/compose/releases/tag/1.27.0 – Mathias Brodala Jan 15 '21 at 11:19
  • 12
    I am using a Compose file with version 3.9, and the `condition` field works. – Sam Jones Mar 09 '21 at 13:52
  • @Mint it's not documented, but seems to be working fine. I wonder whether it's a lack of documentation or a feature which is going to be deprecated? – Kolyunya Jun 01 '21 at 15:38
  • 1
    Thank you @laimison. There appears to be an extra `$`, so instead use `test: ["CMD", 'mysqladmin', 'ping', '-h', 'localhost', '-u', 'root', '-p$MYSQL_ROOT_PASSWORD' ]` if MYSQL_ROOT_PASSWORD is defined in environment variables – WhiteKnight Jun 23 '21 at 15:27
  • I'm getting access denied for provided user or root (both defined in the env vars) – JackTheKnife May 22 '22 at 19:28
  • -p$$MYSQL_ROOT_PASSWORD and -p$MYSQL_ROOT_PASSWORD don't work. – JRichardsz Aug 15 '23 at 19:30
50

condition was removed from the compose spec in versions 3.0 to 3.8, but is now back!

Using compose spec v3.9+ (docker-compose v1.29+), you can use condition as an option in the long-syntax form of depends_on.

Use condition: service_completed_successfully to tell compose that the service must have run to successful completion (exited with code 0) before the dependent service gets started.

services:
  web:
    build: .
    depends_on:
      db:
        condition: service_completed_successfully
      redis:
        condition: service_completed_successfully
  redis:
    image: redis
  db:
    image: postgres

The condition option can be:

  • service_started: equivalent to the short-syntax form
  • service_healthy: waits for the service to be healthy, as defined by its healthcheck option
  • service_completed_successfully: specifies that a dependency is expected to run to successful completion before the dependent service is started (added to docker-compose with PR#8122).

Sadly, it is pretty badly documented. I found references to it on the Docker forums, in Docker doc issues, a Docker Compose issue, and in the Docker Compose e2e fixtures. I am not sure whether it is supported by Docker Compose v2.
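Note that for a long-running database like the one in the question, service_healthy (combined with a healthcheck on the db service) is usually the condition you want, since service_completed_successfully only fires for services that exit. A minimal sketch (the image and check command here are illustrative, not from the answer above):

```yaml
services:
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy   # wait for the healthcheck, not for exit
  db:
    image: mysql
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "127.0.0.1"]
      interval: 5s
      timeout: 5s
      retries: 10
```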

Capripot
  • 4
    I'm using version 3.9, and the condition appears to work, despite what the documentation says. – Sam Jones Mar 08 '21 at 21:25
  • @SamJones The problem addressed here is that `depends_on` does not wait for service to be *ready* before starting the dependent service, because V3 does not support the `condition` form of `depends_on`. – Capripot Mar 09 '21 at 00:45
  • Right, and what I'm saying is that I can get one service to wait for another to be ready using the solution described in https://stackoverflow.com/a/42757250/1459532, in a Docker Compose file with `version` set to `3.9`. The documentation says it's not supported, but it still works. – Sam Jones Mar 09 '21 at 13:51
  • The `host:port` check seems not enough for most applications. I found that when the db port was ready (it passed the wait-for scripts), it still failed. From `docker-compose logs` I found the db instance was not ready for connections. I added a hard-coded `sleep 10` to wait for the db, and it worked. I think one must execute a ping query against the database to ensure it is available. – Hantsy Aug 08 '21 at 04:44
  • 2
    `condition` is added back – leogoesger Nov 05 '21 at 19:31
  • 2
    Doesn't work for me, with service_completed_successfully, I mean the database initalization works but the main app isn't starting. Any suggestions? – PanZWarzywniaka Apr 24 '22 at 13:59
  • Hi @PanZWarzywniaka, were you able to make it work? – Ryan Aquino Jun 09 '22 at 06:22
  • 12
    I don't think this will ever work. Or ever worked. "service_completed_successfully" means that the app will wait until the specified service exits with 0 code. From the documentation: "service_completed_successfully: specifies that a dependency is expected to run to successful completion before starting a dependent service." – Dávid Szabó Jul 30 '22 at 13:56
37

This should be enough

version: '3.4'
services:
  mysql:
    image: mysql
    ports: ['3306:3306']
    environment:
      MYSQL_USER: myuser
      MYSQL_PASSWORD: mypassword
    healthcheck:
      test: mysqladmin ping -h 127.0.0.1 -u $$MYSQL_USER --password=$$MYSQL_PASSWORD
      start_period: 5s
      interval: 5s
      timeout: 5s
      retries: 55
Maksim Kostromin
  • 5
    whats the double `$` for? – InsOp Dec 13 '19 at 11:15
  • 12
    @InsOp it's special syntax you have to use in healthcheck test commands for escaping env variables starting with $, i.e. $$MYSQL_PASSWORD will expand to $MYSQL_PASSWORD, which itself will expand to mypassword in this concrete example – Maksim Kostromin Dec 14 '19 at 16:03
  • 1
    So with this im accessing the env variable inside the container? with a single `$` Im accessing the env variable from the host then i suppose? thats nice thank you! – InsOp Dec 16 '19 at 11:37
  • Umm why 55 retries? Seems arbitrary – mritalian Dec 19 '22 at 21:57
  • This command may be simplified to `mysqladmin ping --silent` (at least it's working for me). However it is not working as a healthy condition. I'm trying to run a web application container on `service_healthy` and DB migration starts immediately inside. And it fails. It can't establish a connection even if the DB is healthy. It needs more time to completely start. – rzlvmp Mar 10 '23 at 09:55
  • @rzlvmp try use version: '2.1' then – Maksim Kostromin Mar 14 '23 at 14:11
  • @mritalian use value you need, it was example from my project – Maksim Kostromin Mar 14 '23 at 14:12
  • This should be marked as best answer because it uses `127.0.0.1` explicitly. If you omit the host or use `localhost` instead, the health check command could connect to the temporary service that `mysql` container brings up for initialization. At this moment your service is not actually ready. – hajimuz Sep 01 '23 at 13:21
  • @hajimuz I'm using it specifically due to a problem I've faced a couple of times already: some systems couldn't handle localhost properly unless you manually updated the /etc/hosts file, while using 127.0.0.1 does not cause such IP resolution problems – Maksim Kostromin Sep 02 '23 at 17:22
24

For a simple healthcheck using docker-compose v2.1, I used:

/usr/bin/mysql --user=root --password=rootpasswd --execute \"SHOW DATABASES;\"

Basically it runs a simple MySQL command, SHOW DATABASES;, using (as an example) the user root with the password rootpasswd.

If the command succeeds, the db is up and ready, so the healthcheck passes. You can use interval so the test runs periodically.

Omitting the other fields for readability, here is what it would look like in your docker-compose.yaml:

version: '2.1'

services:
  db:
    ... # Other db configuration (image, port, volumes, ...)
    healthcheck:
      test: "/usr/bin/mysql --user=root --password=rootpasswd --execute \"SHOW DATABASES;\""
      interval: 2s
      timeout: 20s
      retries: 10

  app:
    ... # Other app configuration
    depends_on:
      db:
        condition: service_healthy
Sylhare
  • 4
    Warning: With "version 3" of the compose file, "condition" support is no longer available. See https://docs.docker.com/compose/compose-file/#depends_on – BartoszK Oct 29 '18 at 04:47
  • 1
    You should use [command](https://docs.docker.com/compose/compose-file/#command) feature together with [wait-for-it.sh](https://docs.docker.com/compose/startup-order/) script. Me doing it this way: `command: ["/home/app/jswebservice/wait-for-it.sh", "maria:3306", "--", "node", "webservice.js"]` – BartoszK Oct 31 '18 at 07:45
  • @BartoszKI don´t understand it. Could you please add a full answer with details? I´m facing the exact same problem, but I can´t make it work. – Thadeu Melo Jan 04 '19 at 15:27
  • Make sure you are using v2.1, otherwise follow the new guidelines for v3.0 and above. – Sylhare Jan 06 '19 at 14:19
  • 2
    `--execute \"SHOW DATABASES;\"` is what made it wait for me until the database was available for the application to access – Taku Jun 09 '19 at 04:35
  • 1
    "condition" seems to work again in v3 since [docker-compose v1.27.0](https://github.com/docker/compose/releases/tag/1.27.0). This health check worked for me with mysql 8.0 as `--execute="SHOW DATABASES;"` – Yarrow Apr 09 '21 at 17:59
13

Adding an updated solution for the healthcheck approach. Simple snippet:

healthcheck:
  test: out=$$(mysqladmin ping -h localhost -P 3306 -u foo --password=bar 2>&1); echo $$out | grep 'mysqld is alive' || { echo $$out; exit 1; }

Explanation: Since mysqladmin ping returns false positives (especially for a wrong password), I save the output to a temporary variable, then use grep to find the expected output (mysqld is alive). If found, the check returns exit code 0. If not, it prints the whole message and returns exit code 1.

Extended snippet:

version: "3.8"
services:
  db:
    image: linuxserver/mariadb
    environment:
      - FILE__MYSQL_ROOT_PASSWORD=/run/secrets/mysql_root_password
      - FILE__MYSQL_PASSWORD=/run/secrets/mysql_password
    secrets:
      - mysql_root_password
      - mysql_password
    healthcheck:
      test: out=$$(mysqladmin ping -h localhost -P 3306 -u root --password=$$(cat $${FILE__MYSQL_ROOT_PASSWORD}) 2>&1); echo $$out | grep 'mysqld is alive' || { echo $$out; exit 1; }

secrets:
  mysql_root_password:
    file: ${SECRETSDIR}/mysql_root_password
  mysql_password:
    file: ${SECRETSDIR}/mysql_password

Explanation: I'm using docker secrets instead of env variables (but this can be achieved with regular env vars as well). The use of $$ is for literal $ sign which is stripped when passed to the container.

Output from docker inspect --format "{{json .State.Health }}" db | jq on various occasions:

Everything alright:

{
  "Status": "healthy",
  "FailingStreak": 0,
  "Log": [
    {
      "Start": "2020-07-20T01:03:02.326287492+03:00",
      "End": "2020-07-20T01:03:02.915911035+03:00",
      "ExitCode": 0,
      "Output": "mysqld is alive\n"
    }
  ]
}

DB is not up (yet):

{
  "Status": "starting",
  "FailingStreak": 1,
  "Log": [
    {
      "Start": "2020-07-20T01:02:58.816483336+03:00",
      "End": "2020-07-20T01:02:59.401765146+03:00",
      "ExitCode": 1,
      "Output": "\u0007mysqladmin: connect to server at 'localhost' failed error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2 \"No such file or directory\")' Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!\n"
    }
  ]
}

Wrong password:

{
  "Status": "unhealthy",
  "FailingStreak": 13,
  "Log": [
    {
      "Start": "2020-07-20T00:56:34.303714097+03:00",
      "End": "2020-07-20T00:56:34.845972979+03:00",
      "ExitCode": 1,
      "Output": "\u0007mysqladmin: connect to server at 'localhost' failed error: 'Access denied for user 'root'@'localhost' (using password: YES)'\n"
    }
  ]
}
Maxim_united
12

If you can change the container to wait for mysql to be ready, do it.

If you don't have control over the container that you want to connect to the database, you can try to wait for the specific port.

For that purpose, I'm using a small script to wait for a specific port exposed by another container.

In this example, myserver will wait for port 3306 of mydb container to be reachable.

# Your database
mydb:
  image: mysql
  ports:
    - "3306:3306"
  volumes:
    - yourDataDir:/var/lib/mysql

# Your server
myserver:
  image: myserver
  ports:
    - "....:...."
  entrypoint: ./wait-for-it.sh mydb:3306 -- ./yourEntryPoint.sh

You can find the wait-for-it script's documentation here

nono
9

RESTART ON-FAILURE

Since v3, condition: service_healthy is no longer available. The idea is that the developer should implement a crash-recovery mechanism within the app itself. However, for simple use cases, an easy way to resolve this issue is to use the restart option.

If the mysql service's status causes your application to exit with code 1, you can use one of the available restart policy options, e.g. on-failure.

version: "3"

services:

    app:
      ...
      depends_on:
        - db
      restart: on-failure
Hamid Asghari
8

I had the same problem, so I created an external bash script for this purpose (inspired by Maxim's answer). Replace mysql-container-name with the name of your MySQL container; a user/password is also needed:

bin/wait-for-mysql.sh:

#!/bin/sh
until docker container exec -it mysql-container-name mysqladmin ping -P 3306 -proot | grep "mysqld is alive" ; do
  >&2 echo "MySQL is unavailable - waiting for it... "
  sleep 1
done

In my Makefile, I call this script just after my docker-compose up call:

wait-for-mysql: ## Wait for MySQL to be ready
    bin/wait-for-mysql.sh

run: up wait-for-mysql reload serve ## Start everything...

Then I can call other commands without having the error:

An exception occurred in driver: SQLSTATE[HY000] [2006] MySQL server has gone away

Output example:

docker-compose -f docker-compose.yaml up -d
Creating network "strangebuzzcom_default" with the default driver
Creating sb-elasticsearch ... done
Creating sb-redis              ... done
Creating sb-db                 ... done
Creating sb-app                ... done
Creating sb-kibana             ... done
Creating sb-elasticsearch-head ... done
Creating sb-adminer            ... done
bin/wait-for-mysql.sh
MySQL is unavailable - waiting for it... 
MySQL is unavailable - waiting for it... 
MySQL is unavailable - waiting for it... 
MySQL is unavailable - waiting for it... 
mysqld is alive
php bin/console doctrine:schema:drop --force
Dropping database schema...
[OK] Database schema dropped successfully!
COil
7

I modified the docker-compose.yml as per the following example and it worked.

  mysql:
    image: mysql:5.6
    ports:
      - "3306:3306"
    volumes:       
      # Preload files for data
      - ../schemaAndSeedData:/docker-entrypoint-initdb.d
    environment:
      MYSQL_ROOT_PASSWORD: rootPass
      MYSQL_DATABASE: DefaultDB
      MYSQL_USER: usr
      MYSQL_PASSWORD: usr
    healthcheck:
      test:  mysql --user=root --password=rootPass -e 'Design your own check script ' LastSchema

In my case, ../schemaAndSeedData contains multiple schema and data-seeding SQL files. Your own check script can be similar to select * from LastSchema.LastDBInsert.

While web dependent container code was

depends_on:
  mysql:
    condition: service_healthy
Sylhare
Mukesh Agarwal
7

condition has been added back, so now you can use it again. There is no need for wait-for scripts. If you are using scratch to build images, you cannot run those scripts anyway.

For API service

api:
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    depends_on:
      content-db:
        condition: service_healthy
    ...

For db block

content-db:
    image: mysql:5.6
    restart: on-failure
    command: --default-authentication-plugin=mysql_native_password
    volumes:
      - "./internal/db/content/sql:/docker-entrypoint-initdb.d"
    environment:
      MYSQL_DATABASE: content
      MYSQL_TCP_PORT: 5306
      MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
    healthcheck:
      test: "mysql -uroot -p$MYSQL_ROOT_PASSWORD content -e 'select 1'"
      interval: 1s
      retries: 120
leogoesger
  • This solution worked for me in docker compose v3. I used the syntax `test: "mariadb --host=database --user=${MARIADB_USER} --password=${MARIADB_PASSWORD} -e 'SELECT 1;'"` – edwinbradford Jun 03 '23 at 19:07
6

You can try this docker-compose.yml:

version: "3"

services:

  mysql:
    container_name: mysql
    image: mysql:8.0.26
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: test_db
      MYSQL_USER: test_user
      MYSQL_PASSWORD: 1234
    ports:
      - "3306:3306"
    volumes:
      - mysql-data:/var/lib/mysql
    healthcheck:
      test: "mysql $$MYSQL_DATABASE -u$$MYSQL_USER -p$$MYSQL_PASSWORD -e 'SELECT 1;'"
      interval: 20s
      timeout: 10s
      retries: 5

volumes:
  mysql-data:
yuen26
3

Although using healthcheck together with service_healthy is a good solution, I wanted a different solution that doesn't rely on the health check itself.

My solution utilizes the atkrad/wait4x image. Wait4X allows you to wait for a port or a service to enter the requested state, with a customizable timeout and interval time.

Example:

services:
  app:
    build: .
    depends_on:
      wait-for-db:
        condition: service_completed_successfully
        
  db:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=test

  wait-for-db:
    image: atkrad/wait4x
    depends_on:
      - db
    command: tcp db:3306 -t 30s -i 250ms

Explanation

The example docker-compose file includes the services:

  • app - this is the app that connects to the database once the database instance is ready
    • depends_on waits for the wait-for-db service to complete successfully (i.e., exit with code 0)
  • db - this is the MySQL service
  • wait-for-db - this service waits for the database to open its port
    • command: tcp db:3306 -t 30s -i 250ms - wait on the TCP protocol for 3306 port, with a timeout of 30 seconds, check the port every 250 milliseconds
Dávid Szabó
3

After going through the other solutions, mysqladmin ping did not work for me. This is because mysqladmin will return a success exit code (i.e. 0) even if the MySQL server has started but is not yet accepting connections on port 3306. During the initial start, the MySQL server starts on port 0 to set up the root user and initial databases. This is why the test yields a false positive.

Here is my healthcheck test:

test: ["CMD-SHELL", "exit | mysql -h localhost -P 3306 -u root -p$$MYSQL_ROOT_PASSWORD" ]

The exit | closes the MySQL input prompt immediately after a successful connection.
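To see why `exit |` works: the left side of the pipe runs in a subshell and closes the pipe immediately, so the reader sees EOF and terminates cleanly. A tiny illustration, with cat standing in for the mysql client:

```shell
# 'exit' in a pipeline runs in a subshell; its only visible effect is
# closing the pipe, so the reader gets EOF right away and exits 0.
exit | cat
echo "pipeline exit code: $?"
```

So the mysql client connects, reads EOF, and quits; the healthcheck's exit code then reflects only whether the connection succeeded.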

My Complete Docker Compose file:

version: '3'

services:
  mysql:
    image: mysql:8
    hostname: mysql
    ports:
      - "3306:3306"
    environment:
      - MYSQL_DATABASE=mydb
      - MYSQL_ALLOW_EMPTY_PASSWORD=1
      - MYSQL_ROOT_PASSWORD=mypass
    healthcheck:
      test: ["CMD-SHELL", "exit | mysql -h localhost -P 3306 -u root -p$$MYSQL_ROOT_PASSWORD" ]
      interval: 5s
      timeout: 20s
      retries: 30
  web:
    build: .
    ports:
      - '8000:8000'
    depends_on:
      mysql:
        condition: service_healthy
1

I'd like to provide one more solution, which was mentioned in one of the comments but not really explained:
There's a tool called wait-for-it, which is mentioned on
https://docs.docker.com/compose/startup-order/
How does it work? You just specify the host and the port that the script needs to check periodically until they're ready. If they are, it will execute the program that you provide to it. You can also specify for how long it should keep checking whether host:port is ready. For me this is the cleanest solution that actually works.
Here's the snippet from my docker-compose.yml file.
Here's the snippet from my docker-compose.yml file.

version: '3'

services:

  database:
    build: DatabaseScripts
    ports:
      - "3306:3306"
    container_name: "database-container"
    restart: always

  backend:
    build: backend
    ports:
      - "3000:3000"
    container_name: back-container
    restart: always
    links:
      - database
    command: ["./wait-for-it.sh", "-t", "40", "database:3306", "--", "node", "app.js"]
    # above line does the following:
    #   check periodically for 40 seconds if (host:port) = database:3306 is ready
    #   if it is, run 'node app.js'
    #   app.js is the file that is connecting with the db

  frontend:
    build: quiz-app
    ports:
      - "4200:4200"
    container_name: front-container
    restart: always

The default waiting time is 20 seconds. More details can be found at
https://github.com/vishnubob/wait-for-it.

I tried it with both 2.x and 3.x versions - it works fine everywhere.
Of course you need to provide wait-for-it.sh to your container - otherwise it won't work.
To do so, use the following code:

COPY wait-for-it.sh <DESTINATION PATH HERE>

I added it in /backend/Dockerfile, so it looks something like this:

FROM node
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
COPY wait-for-it.sh /usr/src/app
RUN npm install
COPY . /usr/src/app
EXPOSE 3000
CMD ["npm", "start"]

To check everything is working correctly, run docker-compose logs. After some time, somewhere in the logs you should see output similar to this:

<container_name> | wait-for-it.sh: waiting 40 seconds for database:3306
<container_name> | wait-for-it.sh: database:3306 is available after 12 seconds

NOTE: This solution was provided by BartoszK in previous comments.

Erg
  • I have tried to use this `wait-for-it` script to check the `host:port` of dependent services, but it still failed. It seems the port was ready for connections, but the db instance was still in progress. – Hantsy Aug 08 '21 at 04:38
1

None of the answers worked for me:

  • Docker version 20.10.6
  • docker-compose version 1.29.2
  • docker-compose yml version: version: '3.7'
  • mysql 5.7
    • run script at container start : docker-entrypoint-initdb.d

Solution

Check for some word in the last lines of the mysql log which indicates something like "I'm ready".

This is my compose file:

version: '3.7'

services:
  mysql:
    image: mysql:5.7
    command: mysqld --general-log=1 --general-log-file=/var/log/mysql/general-log.log
    container_name: mysql
    ports:
     - "3306:3306"
    volumes:
     - ./install_dump:/docker-entrypoint-initdb.d
    environment:
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_USER: jane
      MYSQL_PASSWORD: changeme
      MYSQL_DATABASE: blindspot
    healthcheck:
          test: "cat /var/log/mysql/general-log.log | grep \"root@localhost on  using Socket\""
          interval: 1s
          retries: 120

  some_web:
    image: some_web
    container_name: some_web
    ports:
     - "80:80"
    depends_on:
        mysql:
            condition: service_healthy

Explanation

After several checks I was able to get the entire mysql log of the container.

docker logs mysql could be enough, but I was not able to access the docker log inside the healthcheck, so I had to dump the query log of mysql into a file with:

command: mysqld --general-log=1 --general-log-file=/var/log/mysql/general-log.log

After that I ran my mysql container several times to determine whether the log was the same. I found that the last lines were always the same:

2021-08-30T01:07:06.040848Z    10 Connect   root@localhost on  using Socket
2021-08-30T01:07:06.041239Z    10 Query SELECT @@datadir, @@pid_file
2021-08-30T01:07:06.041671Z    10 Query shutdown
2021-08-30T01:07:06.041705Z    10 Query 
mysqld, Version: 5.7.31-log (MySQL Community Server (GPL)). started with:
Tcp port: 0  Unix socket: /var/run/mysqld/mysqld.sock
Time                 Id Command    Argument

Finally, after some attempts, this grep returns just one match, which corresponds to the end of the mysql log after the execution of the dumps in /docker-entrypoint-initdb.d:

cat /var/log/mysql/general-log.log | grep \"root@localhost on  using Socket\"

Strings like started with or Tcp port: returned several matches (at the start, middle and end of the log), so they are not suitable for detecting the end of a successful mysql startup log.

healthcheck

Happily, when grep finds at least one match, it returns a success exit code (0). So using it in the healthcheck was easy:

healthcheck:
  test: "cat /var/log/mysql/general-log.log | grep \"root@localhost on  using Socket\""
  interval: 1s
  retries: 120
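The healthcheck relies only on grep's exit status, which can be sanity-checked outside Docker; a quick illustration (the piped string is a stand-in for the real general-log content):

```shell
# grep exits 0 when the pattern is found and non-zero otherwise --
# exactly the success/failure signal a Docker healthcheck needs.
printf 'root@localhost on  using Socket\n' | grep -q 'using Socket' && echo healthy || echo unhealthy
```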

Improvements

  • If someone knows how to read docker logs mysql inside the healthcheck, that would be better than enabling the query log
  • Handle the case when the SQL scripts return an error.
JRichardsz
1

This worked for me:

version: '3'

services:

  john:
    build:
      context: .
      dockerfile: containers/cowboys/john/Dockerfile
      args:
        - SERVICE_NAME_JOHN
        - CONTAINER_PORT_JOHN
    ports:
      - "8081:8081" # Forward the exposed port on the container to port on the host machine
    restart: unless-stopped
    networks:
      - fullstack
    depends_on:
      db:
        condition: service_healthy
    links:
      - db

  db:
    build:
      context: containers/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: docker_user
      MYSQL_PASSWORD: docker_pass
      MYSQL_DATABASE: cowboys
    container_name: golang_db
    restart: on-failure
    networks:
      - fullstack
    ports:
      - "3306:3306"
    healthcheck:
      test: mysqladmin ping -h 127.0.0.1 -u $$MYSQL_USER --password=$$MYSQL_PASSWORD

networks:
  fullstack:
    driver: bridge

// containers/mysql/Dockerfile

FROM mysql
COPY cowboys.sql /docker-entrypoint-initdb.d/cowboys.sql
R Sun
1

Most of the answers here are only half correct.

I used the mysqladmin ping --silent command and it was mostly good, but even when the container became healthy, it wasn't able to handle external requests. So I decided to switch to a more complicated command, and to use the container's external address, to be sure that the healthcheck does the same thing a real request will do:

services:
  my-mariadb:
    container_name: my-mariadb
    image: ${DB_IMAGE}
    environment:
      MARIADB_ROOT_PASSWORD: root_password
      MARIADB_USER: user
      MARIADB_PASSWORD: user_password
      MARIADB_DATABASE: db_name
    volumes:
      - ./db/dump.sql:/docker-entrypoint-initdb.d/dump.sql
    ports:
      - 3306:3306
    healthcheck:
      test: mysql -u"$${MARIADB_USER}" -p"$${MARIADB_PASSWORD}" -hmariadb "$${MARIADB_DATABASE}" -e 'SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES LIMIT 1;'
      interval: 20s
      timeout: 5s
      retries: 5
      start_period: 30s
    networks:
      my_network:
        aliases:
          - mariadb

Here the MySQL query runs via the external hostname (mariadb).

The test may also look like this:

mysql -u"$${MARIADB_USER}" -p"$${MARIADB_PASSWORD}" -h"$$(ip route get 1.2.3.4 | awk '{print $7}' | awk /./)" "$${MARIADB_DATABASE}" -e 'SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES LIMIT 1;'

if you want to use the IP address instead of the hostname.

When I used the mysqladmin ping command, the time until the status changed to healthy was about 21 seconds; after I switched to the new command it rose to 41 seconds. That means the database needs an extra 20 seconds to be fully configured and able to handle external requests.

rzlvmp
  • `mysqladmin` ping does not work for me either. I had to use a similar approach to yours. This is because `mysqladmin` will return a success error code even if MySQL server has started but not accepting connection on port 3306. Here is my healthcheck test: `test: ["CMD-SHELL", "exit | mysql -h localhost -P 3306 -u root -p$$MYSQL_ROOT_PASSWORD" ]` – Ahmad Daudu Sulaiman Mar 20 '23 at 19:29
0

For me it was both:

the MySQL image version and the environment variable SPRING_DATASOURCE_URL. If I remove SPRING_DATASOURCE_URL, it doesn't work. Neither does it if I use MySQL 8.0 or above.

version: "3.9"

services:
  api:
    image: api
    build:
      context: ./api
    depends_on:
      - db
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://db:3306/api?autoReconnect=true&useSSL=false
    networks:
      - private
    ports:
      - 8080:8080
  db:
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: "api"
      MYSQL_ROOT_PASSWORD: "root"
    networks:
      - private
    ports:
      - 3306:3306

networks:
  private:
juanmorschrott