
I have initialized the database with the postgres:14-alpine image through my SQL file, and everything works fine. The idea is that I want to archive the database created or modified by the application on Docker Hub. I tried docker commit and creating a new tag, but when I pull the image back I still have to run the SQL file to create the initial database; nothing seems to have changed. Can someone help me out? Thanks. Here is my repo: https://github.com/ThanhDeveloper/NodeDocker/tree/DockerComposeApp https://hub.docker.com/repository/docker/thanhdeveloper/my_jp_idol_postgres_image/general

[Screenshot: original database]

[Screenshot: database modified by the application]

[Screenshot: my Docker Desktop]

What I want is for the database pushed to Docker Hub to contain the 4 rows of data, with no need to rerun the SQL init file: when I pull it back, I should get a Postgres image whose database already has the 4 modified rows. Can someone help me? Thank you.

Thanh Nguyen
  • To clarify, you want the _current state_ of the database to be persisted in the image so that the image can be pushed to Docker Hub? That is, when you create a new container from the image, modify the database's contents, and then stop the container, publishing the image should include the new contents? – Zulfe Feb 04 '22 at 14:17
  • Yes, I want the current state of the database to be persisted in the image so that the image can be pushed to Docker Hub. Then, when I pull it back, I get the updated, modified image @Zulfe – Thanh Nguyen Feb 04 '22 at 14:21
  • In the case above, I want the image to keep the 4 rows in the database, so when I pull again I get 4 rows, not 2 @Zulfe – Thanh Nguyen Feb 04 '22 at 14:23
  • The issue is that when you commit, data that lives in volumes doesn't go into the image. So you have to export the data using the database's own tools and archive it to a location outside the volume. – fly2matrix Feb 04 '22 at 14:26
  • If I understand correctly, I need to export the modified database by backing it up as a separate file xxx.backup and then restore it? And that means I cannot commit my modified database into the previously created image, right? @fly2matrix – Thanh Nguyen Feb 04 '22 at 14:31
  • Right. You can export it and store it at any location in the container (outside a volume); when you commit that container, the backup will be in the image. The next step would be to override the image's entrypoint with a script that restores this backup (if present) at startup; see the sketch below. – fly2matrix Feb 04 '22 at 14:39
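
A minimal sketch of this export-then-commit approach (the container name, database name, and tag are examples, not taken from the question):

```bash
# 1. Dump the current database to a path that is NOT on a volume
#    ("my_postgres" and "mydb" are placeholder names)
docker exec my_postgres pg_dump -U postgres -d mydb -f /backup.sql

# 2. Commit and push the container; /backup.sql lives on the container
#    filesystem, so it is baked into the new image (volume data is not)
docker commit my_postgres thanhdeveloper/my_jp_idol_postgres_image:with-data
docker push thanhdeveloper/my_jp_idol_postgres_image:with-data

# 3. The image still needs an entrypoint or init script that restores
#    /backup.sql when a container starts with an empty data directory
```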

1 Answer


I wanted to create a dedicated answer for @fly2matrix's proposed solution, which I agree with.

You are creating a container from an image. You could create several containers from the same image, so how would all of the database changes from n containers be persisted within the image from which they came?

You should attach a volume to the container. Configure the image (through the Dockerfile that you're using to build it) to periodically write the contents of the database to a SQL file in that volume.
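
For instance, a minimal sketch with plain docker commands (the container name, database name, password, and paths are assumptions, not taken from the repo):

```bash
# Run Postgres with a host directory mounted for backups
docker run -d --name my_postgres \
  -e POSTGRES_PASSWORD=postgres \
  -v "$(pwd)/backups:/backups" \
  postgres:14-alpine

# Periodically dump the database into the mounted directory,
# e.g. from cron on the host
docker exec my_postgres pg_dump -U postgres -d mydb -f /backups/dump.sql
```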

Create a custom script that publishes the SQL file to some sort of object storage (e.g. AWS S3). When you are done with the container and want to persist the changes, run the script on the SQL file exported to the container's volume.
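
A hypothetical publish script might look like this (the bucket name and paths are placeholders; it assumes the AWS CLI is installed and configured wherever the script runs):

```bash
#!/bin/sh
# Copy the latest dump to object storage
set -e
aws s3 cp ./backups/dump.sql s3://my-db-backups/dump.sql
```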

Update the Dockerfile to pull this SQL file down from your object storage solution and load it into Postgres. Note that this creates an image whose database load script is whatever it was at build-time.
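
A sketch of such a Dockerfile, relying on the official image's /docker-entrypoint-initdb.d mechanism to load the dump on first start (the URL is a placeholder and assumes a publicly readable object; for a private bucket, download the file before the build and COPY it in instead):

```dockerfile
FROM postgres:14-alpine

# Files in /docker-entrypoint-initdb.d/ run automatically the first time
# the container starts with an empty data directory
ADD https://my-db-backups.s3.amazonaws.com/dump.sql /docker-entrypoint-initdb.d/dump.sql
RUN chmod 644 /docker-entrypoint-initdb.d/dump.sql
```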

Alternatively, write a script that pulls down this file when the container starts and then uses it to populate the Postgres instance.
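
For example, a hypothetical init script dropped into /docker-entrypoint-initdb.d/ (the official entrypoint runs it when the data directory is empty; the URL is a placeholder for your object storage location):

```bash
#!/bin/sh
# e.g. saved as /docker-entrypoint-initdb.d/10-restore.sh
set -e
wget -O /tmp/dump.sql https://my-db-backups.s3.amazonaws.com/dump.sql
psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER" -d "$POSTGRES_DB" -f /tmp/dump.sql
```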

Zulfe