2

For a study, I deployed a cloud architecture on my computer using Docker (Nginx for load balancing and some Apache servers running a simple PHP application).

I wanted to know if it was possible to use several computers to deploy my containers in order to increase the power available.

(I'm using a MacBook Pro with Yosemite. I've installed boot2docker with VirtualBox.)
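As a rough sketch of the single-host setup described above (assuming the official `nginx` and `php:apache` images and `--link`, which was the container-connection mechanism of the boot2docker era; container names and the config path are placeholders, not from the question):

```shell
# Two Apache+PHP application servers (official php:apache image assumed)
docker run -d --name app1 php:apache
docker run -d --name app2 php:apache

# Nginx in front as the load balancer, linked to both app containers.
# An nginx.conf with an `upstream { server app1; server app2; }` block is assumed.
docker run -d --name lb --link app1 --link app2 -p 80:80 \
  -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" nginx
```

On newer Docker versions, a user-defined network (`docker network create`) would replace `--link`.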

John Saunders
Tom Giraudet

2 Answers

3

Disclosure: I was a maintainer on Swarm Legacy and Swarm mode

Edit: This answer mentions Docker Swarm Legacy, the first version of Docker Swarm. Since then, a new version called Swarm mode has been included directly in the Docker engine; it behaves a bit differently in terms of topology and features, even though the big ideas remain.

Yes, you can deploy Docker on multiple machines and manage them together as a single pool of resources. There are several solutions you can use to orchestrate your containers across multiple machines.

You can use Docker Swarm, Kubernetes, Mesos/Marathon, or Fleet (there might be others, as this is a fast-moving area). There are also commercial solutions like Amazon ECS.

In the case of Swarm, it uses the Docker remote API to communicate with remote Docker daemons and schedules containers according to the load or some extra constraints (other systems are similar, with more or fewer features). Here is an example of a small Swarm deployment.

                                Docker CLI
                                    +   
                                    |     
                                    |        
                                    | 4000 (or else)    
                                    | server
                           +--------v---------+   
                           |                  |          
              +------------>   Swarm Manager  <------------+     
              |            |                  |            |    
              |            +--------^---------+            |  
              |                     |                      |     
              |                     |                      |  
              |                     |                      |     
              |                     |                      |  
              | client              | client               | client  
              | 2376                | 2376                 | 2376   
              |                     |                      |      
    +---------v-------+    +--------v--------+    +--------v--------+     
    |                 |    |                 |    |                 |    
    |   Swarm Agent   |    |   Swarm Agent   |    |   Swarm Agent   |    
    |     Docker      |    |     Docker      |    |     Docker      |       
    |     Daemon      |    |     Daemon      |    |     Daemon      |  
    |                 |    |                 |    |                 |          
    +-----------------+    +-----------------+    +-----------------+
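The topology above could be set up roughly like this (a hedged sketch using Swarm Legacy's hosted token discovery; `<node_ip>`, `<manager_ip>`, and `<cluster_id>` are placeholders, and the ports match the diagram):

```shell
# Create a cluster ID using the hosted discovery service (Swarm Legacy)
docker run --rm swarm create
# -> prints a <cluster_id> token

# On each node: join the cluster, advertising the local Docker daemon (port 2376 here)
docker run -d swarm join --addr=<node_ip>:2376 token://<cluster_id>

# On the manager: listen on port 4000 and schedule onto the joined daemons
docker run -d -p 4000:2375 swarm manage token://<cluster_id>

# Point the regular Docker CLI at the manager; containers are scheduled cluster-wide
docker -H tcp://<manager_ip>:4000 run -d nginx
```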

Choosing one of those systems is basically a choice between:

  • Cluster deployment simplicity and maintenance
  • Flexibility of the scheduler
  • Completeness of the API
  • Support for running VMs
  • Higher abstraction for groups of containers: Pods
  • Networking model (Bridge/Host or Overlay or Flat network)
  • Compatibility with the Docker remote API

It depends mostly on the use case and the kind of workload you are running. For more details on the differences between those systems, see this answer.

abronan
1

That sounds like clustering, which is what Docker Swarm does (see its GitHub repo).

It turns a pool of Docker hosts into a single, virtual host.

See for example issue 247: "How replication control and load balancing being taken care of?"
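With today's built-in Swarm mode, the same pooling needs no separate Swarm image. A minimal sketch, with placeholder addresses and an assumed service name:

```shell
# On the manager node: initialize the swarm
docker swarm init --advertise-addr <manager_ip>
# -> prints a `docker swarm join ...` command to run on each worker

# Back on the manager: create a replicated service across the pool
docker service create --name web --replicas 3 -p 80:80 nginx

# Check replica placement across the nodes
docker service ls
```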

VonC