
I have a question about Docker and Postgres. I set up a new Postgres database every time the container starts up, and I want to import a given dump.

My problem is like this one, but its answers are not sufficient for me: Docker postgres does not run init file in docker-entrypoint-initdb.d

Docker-Compose:

postgres:
  environment:
    - POSTGRES_USER=****
    - POSTGRES_PASSWORD=****
    - POSTGRES_DB=****
  build:
    context: .
    dockerfile: dockerfile-postgres

My Dockerfile (I already tried it with a script with a .sh ending):

FROM postgres
ADD dump.sql /docker-entrypoint-initdb.d/

According to https://hub.docker.com/_/postgres/, placing dump.sql in that directory should cause the database to be imported.

Starting up the application with docker only gives:

postgres_1     | LOG:  invalid record length at 0/1708600
postgres_1     | LOG:  redo is not required
postgres_1     | LOG:  MultiXact member wraparound protections are now enabled
postgres_1     | LOG:  database system is ready to accept connections
postgres_1     | LOG:  autovacuum launcher started

Besides, I tested whether my database had been imported: there is no table in my database. What am I doing wrong (the files are readable and executable on the target system)? Importing the dump with psql is no problem, so the dump itself is correct.

I hope you can help me and I want to thank you in advance for it.

Simon
  • Hope I remember it right, but `ADD` only does magic when given a URL, as discussed in [Docker COPY vs ADD](http://stackoverflow.com/questions/24958140/docker-copy-vs-add). Wouldn't you want to run `pg_restore` and give it the path to your `dump.sql` copy? ... but maybe `pg_restore` does not work with the containerized variant (as the comment - 18 days old - on the postgres Docker Hub page suggests) – Dilettant Jun 15 '16 at 11:51
  • Yeah, I thought about COPY too and tried it, same result. The file is correctly in the target. I already tried with a pg_dump and an sh script in this folder. The script should have only called: `pg_restore -d databasename /tmp/dump.backup`. pg_restore itself worked on the target machine, but the script is not being executed – Simon Jun 15 '16 at 11:58
  • It is pg_restore you want, I guess - not dumping what is in the DB, but filling it back in, right? – Dilettant Jun 15 '16 at 11:59
  • The SQL file and importing the dump are not the problem; my script imported it correctly when executed manually. My problem is that Docker won't execute it – Simon Jun 15 '16 at 12:00
  • Sorry to suggest the following, but... did you already try naming it `init.sql` instead of `dump.sql`? That might be the level of magic that could lead to execution in such a magic folder. – Dilettant Jun 15 '16 at 12:07

3 Answers


Okay, I found the trick: I have to execute `docker-compose rm` in order for the scripts and SQL files in this folder to be executed. Once the container is built and not removed, the init folder is ignored.

UPDATE: I ran into this problem again; this time, removing the images Docker had created resolved it.
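For completeness, the full reset sequence might look like this (service and image names are examples, not taken from the question; requires a running Docker daemon):

```shell
# Stop and remove the containers; init scripts only run against a fresh data directory
docker-compose stop
docker-compose rm -f

# If the built image is stale, remove it too (image name is an example)
docker rmi myproject_postgres

# Rebuild and start; this time /docker-entrypoint-initdb.d/ should be executed
docker-compose up --build
```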

Simon
    Just a note that you may also need to do `docker-compose build` after you run `docker-compose rm` if you've altered the `ADD`ed scripts. – user101289 Sep 21 '16 at 22:40
  • life saver... even doing a clean of all docker images and layers did not work, but this did. – drewboswell Jan 30 '17 at 11:49
  • as this seems to be a common problem, it would be much appreciated if this answer went into a bit more detail – Pipo May 06 '21 at 18:05

This most probably happens because:

The entrypoint script docker-entrypoint.sh, which is responsible for running the /docker-entrypoint-initdb.d/* files in alphabetical order, runs them only once, when the volumes are first created.
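This behavior is visible in the official image's docker-entrypoint.sh; roughly sketched (a simplified sketch, not the actual script):

```shell
# Simplified sketch of the official entrypoint's init logic
if [ ! -s "$PGDATA/PG_VERSION" ]; then
    # data directory is empty: initialize it...
    initdb
    # ...and only in this branch run the init files
    for f in /docker-entrypoint-initdb.d/*; do
        case "$f" in
            *.sh)  . "$f" ;;
            *.sql) psql -f "$f" ;;
        esac
    done
fi
# if PG_VERSION already exists, the whole block is skipped
```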


Solution (works for me). This is a snippet from my docker-compose file:

  postgres_service:
    build:
      context : docker-postgres
      dockerfile: Dockerfile-base
    image: 'datahub/postgres:development'
    user: postgres
    ports:
      - "5432:5432"
    env_file:
      - credentials/postgres/development.env
    volumes:
      - /Users/yogesh.yadav/DockerData/datahub/postgresql/data:/var/lib/postgresql/data
    restart: unless-stopped
    networks:
      - datahubnetwork

Steps -

1) docker-compose -f docker-compose-filename.yml down

or

docker-compose -f docker-compose-filename.yml stop postgres_service

2) Remove the volume(s) (/Users/yogesh.yadav/DockerData/datahub/postgresql/data) attached to the postgres_service service. You can do this manually, or via docker-compose rm or docker volume rm. Please identify which volumes are attached to that postgres service before removing them.
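As a sketch (the volume name is an example; with a bind mount like the one above, you would delete the host directory instead):

```shell
# List volumes and find the one backing the postgres service
docker volume ls

# Remove a named volume (example name)
docker volume rm datahub_pgdata

# Or tear down the containers together with their volumes in one step
docker-compose down -v
```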

3) Docker caches image layers when building. If you did not modify your Dockerfile, Docker is going to reuse the already-built image from the cache and run it. So I would advise removing the image for postgres_service as well.

To List images

docker images -a

Output -

REPOSITORY                TAG                 IMAGE ID            CREATED             SIZE
datahub/postgres          development         e7707e670ad4        35 minutes ago      265 MB

Remove that image (use -f if required):

docker rmi IMAGE_ID_HERE

4) Restart your service again

docker-compose -f docker-compose-filename.yml up --build postgres_service

This time you can see that docker-entrypoint.sh runs and your dump.sql is executed.

Yogesh Yadav

The answers here somewhat solved the problem for me, but the final step to get the docker-entrypoint-initdb.d/*.sql scripts working was to ensure there were no syntax issues in the SQL scripts themselves (I had issues because I changed SQL versions in my Dockerfile).

If any exist, everything in the docker-entrypoint-initdb.d/*.sql scripts seems to get rolled back.
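A plausible explanation: the official image runs each init file with psql's ON_ERROR_STOP variable enabled, roughly like this (a simplified sketch, not the exact invocation):

```shell
# Each *.sql file is executed approximately like this by the entrypoint
psql -v ON_ERROR_STOP=1 \
     --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" \
     -f /docker-entrypoint-initdb.d/dump.sql
# With ON_ERROR_STOP=1 the first error aborts the script, so a single
# syntax error can leave the database without any of the expected tables.
```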

To check whether there are any script syntax issues (or other issues), you may find it useful to look at the Docker logs:

$ docker ps --all

Note: the --all (or -a) flag ensures that containers that failed to start up are listed too.

Find the container you just created and note its container ID, which should be a 12-character hash:

$ docker logs baf32ff7ec03

Remember you still need to follow the other answers here - remove the previously built images and delete your data folder (back it up if you need to).

alexkb