
My app consists of a web server (Node.js), multiple workers (Node.js) and a Postgres database. Normally I would just create an app on Heroku with the Postgres add-on and push the app there, with processes defined in a Procfile.

However, the client wants the app delivered to his private server with Docker. So the flow should look like this: I make some changes in my Node.js app (in the web server or workers), "push" the changes to a repo (Docker Hub?), and when the client is ready he "pulls" the changed app (images?) to his server, and the app (Docker containers?) restarts with the new, updated code.

I am new to Docker, and even after reading a few articles/tutorials I am still not sure how to use it...

So ideally there would be one Docker image (on Docker Hub) containing my app code and the database, and the client could just pull it somehow and run it... Is this possible with Docker?

user606521

2 Answers


The standard strategy is to pack each component of your system into a separate Docker image (this is called a microservice architecture) and then create an "orchestration" layer: a set of scripts for deployment, start/stop and update.

For example:

  • the deployment script pulls images from a Docker repo (Docker Hub or your private registry) and calls the start script
  • the start script just does a docker run for each component
  • the stop script calls docker stop for each component
  • the update script calls the stop script, then pulls updated images from the repo, then calls the start script

There are software projects intended to simplify this orchestration, e.g. this SO answer has a comprehensive list. But usually plain bash scripts work just fine.
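The scripts above can be sketched roughly as a single bash file with one function per action. Everything here is illustrative: the image names (`myrepo/web`, `myrepo/worker`) are placeholders, and you would replace them with your own repository paths.

```shell
#!/usr/bin/env bash
# Minimal orchestration sketch -- image names are illustrative placeholders.
set -e

IMAGES="myrepo/web myrepo/worker postgres:9.4"

# Derive a container name from an image reference, e.g.
# "myrepo/web" -> "web", "postgres:9.4" -> "postgres".
name() {
  local n="${1##*/}"
  printf '%s' "${n%%:*}"
}

start() {
  for img in $IMAGES; do
    # --detach runs each container in the background
    docker run --detach --name "$(name "$img")" "$img"
  done
}

stop() {
  for img in $IMAGES; do
    docker stop "$(name "$img")"
    docker rm "$(name "$img")"
  done
}

update() {
  stop
  for img in $IMAGES; do
    docker pull "$img"
  done
  start
}

"$@"   # e.g. ./orchestrate.sh start
```

The client then only ever runs `./orchestrate.sh update` to get your latest code.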

Aleksei Petrenko
  • 2
    A docker image per service can get very cumbersome if you are building and distributing a full os image for every component of your system. It works really well for something like [go](https://golang.org/), where there can be no dependencies and tiny images, not so much for an app with multiple bloaty os package dependencies. You can run multiple supervised processes in a container. You can also distribute a single base image that can run all your individual services. – Matt Aug 11 '15 at 21:18
  • @Matt This is true, microservice architecture is not always convenient. But sometimes this can be beneficial, e.g. we run multiple python apps and thus we have common python base image for all of them -> docker image size is relatively small. – Aleksei Petrenko Aug 12 '15 at 13:35
  • Yep, small is good. Small and a common image is great. It's easy to slip into building, storing and distributing large images which basically just duplicate each other all for a very small difference in content. When you could probably do the same with a single image, but launched two different ways into two different container services. – Matt Aug 12 '15 at 17:58
  • 1
    FYI it looks like you forgot to include the link on your last paragraph. – blah238 Sep 11 '15 at 20:54

This post is quite old and things have changed a lot since then.

I also need to provide clients with ~30 images for my product.

I built a docker-compose file I can rely on, which clients can pull from SCM and deploy with a single command.

The issue I have is that I want to set secrets per client, so I am reading up on CNAB and the Porter project, which seem to solve my problem.

This may help others who are struggling with the same need: running your application on a client's on-premise infrastructure.

Maybe someone has already used Porter and can share a few pros and cons from their experience.
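For reference, a minimal docker-compose.yml for a setup like the one in the question might look like this. The service names, image paths, port and Postgres version are illustrative, not from the original post:

```yaml
# docker-compose.yml -- illustrative sketch, replace image paths with your own
services:
  web:
    image: registry.example.com/myapp/web:latest
    ports:
      - "3000:3000"
    depends_on:
      - db
  worker:
    image: registry.example.com/myapp/worker:latest
    depends_on:
      - db
  db:
    image: postgres:15
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

The client's "single command" is then `docker compose pull && docker compose up -d`, which fetches updated images and recreates only the containers whose image changed.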

Regards,

Alf
  • 31
  • 4