4

Generally it is recommended to run a single process per Docker container, and that makes sense if you are running a single web application that requires several different tools.

For example, the open source web application kanboard makes use of

  • mysql
  • apache
  • php5
  • memcache

Now if that were the only web application I was going to run, it would make sense to run each tool in a separate container to take advantage of Docker's one-process-per-container model.

But say that, instead of running only one web application, I wanted to run multiple web applications:

  • kanboard
  • etherpad
  • plex
  • owncloud
  • dokuwiki
  • discourse

Now how can I use Docker to isolate those web applications? I ask because each application mentioned above might have its own:

  • backend data store (mysql, postgres, sqlite)
  • cache store (memcache, redis)
  • concurrent task management (celery, queues, RQ, SHARQ)
  • web server (nginx, apache)
  • search server (lucene, sphinx, opensearchserver)

There are two ways to use Docker to run those web applications. Two ways that I know of:

  • Run each application along with all of its dependencies in a single container: one for kanboard, one for etherpad, and so on.
  • Adhere to Docker's dictum of one process per container: create one container each for mysql, postgres, sqlite, memcache, and so on, plus one for each application's code itself, and use Docker linking to connect the related containers. This is messier, and a lot more organizing and management is required.

My question is: is there any other way? And if there isn't, which of the above options should I choose, and why?

Or maybe I am using the wrong tool (Docker containers) for the job? Perhaps there is another way of accomplishing application isolation without using Docker containers?

Jay
  • 1,083
  • 1
  • 10
  • 19

3 Answers

1

Your second approach is preferred in principle. Tools like Docker Compose can help you fight the messiness of the linking.
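To illustrate, a compose file for one of the apps above might look roughly like this. This is a hypothetical sketch: the `kanboard/kanboard` image name, environment variables, and ports are assumptions, so check each image's documentation for the settings it actually expects.

```yaml
# Hypothetical docker-compose.yml: one container per process,
# wired together for a single web application.
version: "2"
services:
  kanboard:
    image: kanboard/kanboard   # assumed image name
    ports:
      - "8080:80"
    links:
      - db
      - cache
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
  cache:
    image: memcached
```

A single `docker-compose up` then starts and links the whole group, which is what takes the manual bookkeeping out of the second approach.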

Mykola Gurov
  • 8,517
  • 4
  • 29
  • 27
0

You can run multiple processes per container.
You simply need a base image that is able to manage the end of life of all those processes (see the "PID 1 zombie reaping issue"). Use a base image that knows how to do that, such as phusion/baseimage-docker.

You will then have one container per web app (with all of its dependent processes).

Check whether you can factor some of those processes out into a container of their own, shared by several apps.

Typically, nginx could run in just one additional container, acting as a reverse proxy to all your other web apps and allowing you to access them through the same URL (url/discourse would route to the container managing discourse, url/plex to the one for plex, and so on).
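A minimal sketch of what that proxy container's nginx configuration could look like. The upstream hostnames and ports here are assumptions (they presume containers reachable under those names, e.g. via links or a shared network); substitute your actual container names.

```nginx
# Hypothetical nginx.conf fragment for the reverse-proxy container.
server {
    listen 80;

    # url/discourse -> the discourse container
    location /discourse/ {
        proxy_pass http://discourse:3000/;   # assumed container name/port
        proxy_set_header Host $host;
    }

    # url/plex -> the plex container
    location /plex/ {
        proxy_pass http://plex:32400/;       # assumed container name/port
        proxy_set_header Host $host;
    }
}
```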

VonC
  • 1,262,500
  • 529
  • 4,410
  • 5,250
0

You say:

This is more messy. Lot more organizing and management required.

I think it's completely the other way round. Here are my pros and cons:

multi-process:

pros

  • One Dockerimage/-container per App

cons

  • you really need to make sure that every process is monitored correctly by your init script (which is run as CMD or ENTRYPOINT). If anything fails, you'll end up with a failed container
  • it's hard work to get everything running correctly inside the container; e.g. the database, redis, and so on have to be started and set up before the app
  • no way to update just one component without taking down everything
  • no way to scale horizontally: you need two frontends for performance reasons running against one database? No chance with this approach
  • every image has to be developed AND maintained by yourself; no chance to use vendor images
  • if one component needs an update, you'll have to rebuild and redeploy the whole thing

I had exactly the same task recently, and I decided to go with the second approach for the following reasons:

pros:

  • You can use vendor images (e.g. postgres, wordpress, whatever); no need to build your own
  • scales horizontally: need two web servers? Use two containers of that image

cons:

  • a little more clumsy if done by hand

I really recommend the second approach. With tools like the already-mentioned docker-compose, you "build" your app out of different containers configured in a single docker-compose.yml.

If you then use a tool like https://github.com/jwilder/nginx-proxy (I did; it works like a charm), even the reverse proxying is simple, and you can run X different pieces of software on one host.
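For reference, using nginx-proxy amounts to two commands like the following. The hostname `redmine.example.com` and the `redmine` image are placeholders; the proxy discovers containers by watching the Docker socket for a `VIRTUAL_HOST` environment variable.

```shell
# Start the proxy; it watches the Docker socket for containers
# that declare a VIRTUAL_HOST environment variable.
docker run -d -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy

# Any container started with VIRTUAL_HOST is picked up automatically
# and routed by hostname.
docker run -d -e VIRTUAL_HOST=redmine.example.com redmine
```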

This is how we set up our jenkins, redmine, CMS, and many more things for our company. I hope this helps you with your decision.

Julian Kaffke
  • 428
  • 4
  • 10