If you take a look at Docker's features, most of them are already provided by LXC.

So what does Docker add? Why would I use Docker over plain LXC?

Flimm

5 Answers


From the Docker FAQ:

Docker is not a replacement for LXC. "lxc" refers to capabilities of the Linux kernel (specifically namespaces and control groups) which allow sandboxing processes from one another, and controlling their resource allocations.

On top of this low-level foundation of kernel features, Docker offers a high-level tool with several powerful functionalities:

  • Portable deployment across machines. Docker defines a format for bundling an application and all its dependencies into a single object which can be transferred to any docker-enabled machine, and executed there with the guarantee that the execution environment exposed to the application will be the same. LXC implements process sandboxing, which is an important prerequisite for portable deployment, but that alone is not enough for portable deployment. If you sent me a copy of your application installed in a custom LXC configuration, it would almost certainly not run on my machine the way it does on yours, because it is tied to your machine's specific configuration: networking, storage, logging, distro, etc. Docker defines an abstraction for these machine-specific settings, so that the exact same docker container can run - unchanged - on many different machines, with many different configurations.

  • Application-centric. Docker is optimized for the deployment of applications, as opposed to machines. This is reflected in its API, user interface, design philosophy and documentation. By contrast, the lxc helper scripts focus on containers as lightweight machines - basically servers that boot faster and need less ram. We think there's more to containers than just that.

  • Automatic build. Docker includes a tool for developers to automatically assemble a container from their source code, with full control over application dependencies, build tools, packaging etc. They are free to use make, maven, chef, puppet, salt, debian packages, rpms, source tarballs, or any combination of the above, regardless of the configuration of the machines.

  • Versioning. Docker includes git-like capabilities for tracking successive versions of a container, inspecting the diff between versions, committing new versions, rolling back etc. The history also includes how a container was assembled and by whom, so you get full traceability from the production server all the way back to the upstream developer. Docker also implements incremental uploads and downloads, similar to "git pull", so new versions of a container can be transferred by only sending diffs.

  • Component re-use. Any container can be used as a "base image" to create more specialized components. This can be done manually or as part of an automated build (see the sketch after this list). For example you can prepare the ideal python environment, and use it as a base for 10 different applications. Your ideal postgresql setup can be re-used for all your future projects. And so on.

  • Sharing. Docker has access to a public registry (https://registry.hub.docker.com/) where thousands of people have uploaded useful containers: anything from redis, couchdb, postgres to irc bouncers to rails app servers to hadoop to base images for various distros. The registry also includes an official "standard library" of useful containers maintained by the docker team. The registry itself is open-source, so anyone can deploy their own registry to store and transfer private containers, for internal server deployments for example.

  • Tool ecosystem. Docker defines an API for automating and customizing the creation and deployment of containers. There are a huge number of tools integrating with docker to extend its capabilities. PaaS-like deployment (Dokku, Deis, Flynn), multi-node orchestration (maestro, salt, mesos, openstack nova), management dashboards (docker-ui, openstack horizon, shipyard), configuration management (chef, puppet), continuous integration (jenkins, strider, travis), etc. Docker is rapidly establishing itself as the standard for container-based tooling.
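
To make the automatic build, versioning, re-use and sharing points concrete, here is a minimal sketch of the workflow; the user, image and file names are made up, and the base image is only an example:

```
# hypothetical names throughout; assumes a docker-enabled machine
cat > Dockerfile <<'EOF'
FROM ubuntu:14.04                # component re-use: any image can serve as a base
RUN apt-get update && apt-get install -y python
ADD . /app                       # bundle the application and its dependencies
CMD ["python", "/app/main.py"]
EOF

docker build -t myuser/myapp .   # automatic build from source
docker history myuser/myapp      # versioning: inspect how the image was assembled
docker push myuser/myapp         # sharing: incremental upload to a registry
```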

Solomon Hykes
  • When you say, "any container can be used as a base image", I presume you mean a Docker container, not an LXC container created independently from Docker. As far as I can tell, one can't create a Docker container from scratch; it must always inherit from another Docker container (related question: http://stackoverflow.com/questions/18274088/how-can-i-make-my-own-base-image-for-docker). – Flimm Aug 16 '13 at 13:13
  • You can easily create a new container from any tarball with "docker import". For example: "debootstrap raring ./rootfs; tar -C ./rootfs -c . | docker import flimm/mybase". – Solomon Hykes Aug 20 '13 at 15:47
  • Thanks. I've added your answer to [that question](http://stackoverflow.com/questions/18274088/how-can-i-make-my-own-base-image-for-docker), I hope you don't mind. – Flimm Aug 20 '13 at 16:58
  • The command above should be: `tar -C rootfs/ -c . | docker import - REPOSITORY:TAG`. The dash after `import` is missing. Probably a typo. – dawud Mar 15 '14 at 19:48
  • Is this still true now that Docker's got libcontainer (that it's not a replacement)? – That Realty Programmer Guy May 01 '14 at 05:22
  • @GaretClaborn Yes; since libcontainer is just their own library to access namespaces and cgroups, everything Solomon said still applies. – John Morales May 09 '14 at 19:49
  • The most glaring mistake here of course is that LXC is actually a set of userspace tools (see https://linuxcontainers.org/), and as far as I can tell, Docker does not build on LXC but is more of an alternative to it. The rest of the answer seems to be based on that misunderstanding. – aij Jun 15 '14 at 22:14
  • @aij From what I read, Docker offers an abstraction on top of the LXC API, which is overly complex. It's not an alternative. Without LXC, there would be no Docker - hence the requirement of running on a Linux kernel. Even things like CoreOS are pared-down versions of the Linux kernel. – henry74 Jul 22 '14 at 17:44
  • So can you say that Docker is a subset of LXC? Or more like a wrapper around it? – sargas Dec 22 '14 at 02:44
  • A Linux container is the result of constraining and isolating a process using a set of Linux facilities: chroot, cgroups, and namespaces. LXC is a userspace tool that manipulates those facilities. libcontainer is an alternative to LXC that manipulates those same facilities. Docker uses libcontainer by default but can use LXC instead. That said, Docker is (much) more than a compatibility layer on top of libcontainer/LXC; it adds additional features that the other answers have listed. – user100464 Feb 01 '15 at 21:10
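
To make that last comment concrete, here is a rough sketch of driving those kernel facilities directly, with no LXC, libcontainer or Docker involved; it assumes util-linux's unshare, a cgroup-v1 layout, and root privileges, and the cgroup name is made up:

```
# start a shell in fresh UTS, PID and mount namespaces (--fork is required with --pid)
sudo unshare --uts --pid --fork --mount-proc /bin/bash
hostname sandbox            # run inside: changes the hostname only in the new namespace

# constrain a process with a memory cgroup
sudo mkdir /sys/fs/cgroup/memory/demo
echo 256M | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
echo $$   | sudo tee /sys/fs/cgroup/memory/demo/cgroup.procs    # move this shell into it
```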

Let's take a look at the list of Docker's technical features, and check which ones are provided by LXC and which ones aren't.

Features:

1) Filesystem isolation: each process container runs in a completely separate root filesystem.

Provided with plain LXC.

2) Resource isolation: system resources like cpu and memory can be allocated differently to each process container, using cgroups.

Provided with plain LXC.

3) Network isolation: each process container runs in its own network namespace, with a virtual interface and IP address of its own.

Provided with plain LXC.
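
For reference, items 1-3 with the plain LXC tools look roughly like this (the container name is made up; assumes the LXC 1.x userspace tools):

```
sudo lxc-create -n app1 -t ubuntu     # 1) builds a separate root filesystem under /var/lib/lxc/app1
sudo lxc-start -n app1 -d             # boots the container in the background
sudo lxc-cgroup -n app1 memory.limit_in_bytes 512M    # 2) applies a cgroup resource limit
sudo lxc-ls --fancy                   # 3) lists containers with their own IP addresses
```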

4) Copy-on-write: root filesystems are created using copy-on-write, which makes deployment extremely fast, memory-cheap and disk-cheap.

This is provided by AUFS, a union filesystem that Docker depends on. You could set up AUFS yourself manually with LXC, but Docker uses it as a standard.
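
You can watch the copy-on-write layer at work with something like this (the container name is an example):

```
# the container gets only a thin writable layer on top of the read-only image
docker run --name cowtest ubuntu touch /tmp/hello
docker diff cowtest       # lists just the files added or changed relative to the image
```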

5) Logging: the standard streams (stdout/stderr/stdin) of each process container are collected and logged for real-time or batch retrieval.

Docker provides this.

6) Change management: changes to a container's filesystem can be committed into a new image and re-used to create more containers. No templating or manual configuration required.

"Templating or manual configuration" is a reference to LXC, where you would need to learn about both of these things. Docker allows you to treat containers in the way that you're used to treating virtual machines, without learning about LXC configuration.

7) Interactive shell: docker can allocate a pseudo-tty and attach to the standard input of any container, for example to run a throwaway interactive shell.

LXC already provides this.
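
Side by side, the two look something like this (container and image names are examples):

```
# Docker: allocate a pseudo-tty attached to a throwaway container
docker run -i -t ubuntu /bin/bash

# plain LXC: attach a shell to an already-running container
sudo lxc-attach -n app1 -- /bin/bash
```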


I only just started learning about LXC and Docker, so I'd welcome any corrections or better answers.

Flimm
  • IMHO, this answer misses the point. Docker doesn't "provide" those features; it merely makes them trivially easy to use. If we want to be nitpicky, we can say that LXC doesn't provide isolation: *namespaces* provide it, and LXC is just a commodity userland tool to make them easier to use than with the basic `unshare` tool (or directly the `clone()` syscall). Likewise, Docker makes those things easier to use (and brings many more features to the table, like the ability to push/pull images). My 2c. – jpetazzo Aug 13 '13 at 17:21
  • @jpetazzo: LXC is actually pretty easy; how does Docker make it easier (besides adding other features like pushing and pulling images)? – Flimm Aug 14 '13 at 07:19
  • @Flimm: I like the comparison in issue 16 of the [Admin Magazine](http://www.admin-magazine.com/), p. 34: *Docker bundles LXC together with some other supporting technologies and wraps it in an easy-to-use command-line interface. Using containers is a bit like trying to use Git with just commands like `update-index` and `read-tree`, without familiar tools like `add`, `commit`, and `merge`. Docker provides that layer of "porcelain" over the "plumbing" of LXC, enabling you to work with higher level concepts and worry less about the low-level details.* – 0xC0000022L Sep 09 '13 at 20:07
  • I ran UnixBench benchmarks inside a Docker container and an LXC container running the same OS, and LXC scored better. Since Docker is based on LXC, I am very puzzled by my results. – gextra Nov 25 '13 at 14:40
  • It appears to me that the slower performance of Docker was related to disk I/O, and therefore maybe caused by the adoption of AUFS. – gextra Nov 25 '13 at 14:46
  • LXC can use OverlayFS instead of AUFS for snapshotting and the like. This is almost the same as what Docker gives you out of the box. – ipeacocks Dec 01 '15 at 15:22

The question and answers above are rapidly becoming dated as the development of LXD continues to enhance LXC. Yes, I know Docker hasn't stood still either.

LXD now implements a repository for LXC container images which a user can push/pull from to contribute to or reuse.

LXD's REST API for LXC now enables both local and remote creation, deployment and management of LXC containers using a very simple command syntax.
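
For instance, an illustrative session with LXD's lxc client (container and remote names are made up):

```
lxc launch ubuntu:14.04 web1            # create and start a local container
lxc exec web1 -- apt-get update         # run a command inside it
lxc remote add myserver 192.168.1.10    # register a remote LXD daemon
lxc launch ubuntu:14.04 myserver:web2   # deploy a container remotely over the REST API
```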

Key features of LXD are:

  • Secure by design (unprivileged containers, resource restrictions and much more)
  • Scalable (from containers on your laptop to thousands of compute nodes)
  • Intuitive (simple, clear API and crisp command-line experience)
  • Image based (no more distribution templates, only good, trusted images)
  • Live migration

There is now an nclxd plugin for OpenStack, allowing OpenStack to use LXD to deploy and manage LXC containers as VMs in OpenStack instead of using KVM, VMware, etc.

However, nclxd also enables a hybrid cloud that mixes traditional hardware VMs and LXC "VMs".

The features supported by the nclxd OpenStack plugin include:

  • Stop/start/reboot/terminate container
  • Attach/detach network interface
  • Create container snapshot
  • Rescue/unrescue instance container
  • Pause/unpause/suspend/resume container
  • OVS/bridge networking
  • Instance migration
  • Firewall support

By the time Ubuntu 16.04 is released in April 2016, additional cool features such as block device support and live-migration support will have landed.

bmullan

Docker uses images which are built in layers. This adds a lot in terms of portability, sharing, versioning and other features. These images are very easy to port or transfer, and since they are layered, the changes in each new version are added as layers on top of the previous ones. So, when porting, many times you don't need to port the base layers. Docker runs these images in containers with a contained execution environment, and containers record their changes as new layers, providing easy version control.
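
The layer re-use is visible when pulling related images: only the layers you don't already have are transferred. A sketch, with made-up tags:

```
docker pull myuser/myapp:v1    # the first pull downloads every layer
docker pull myuser/myapp:v2    # shared base layers report "Already exists",
                               # so only the changed top layers are downloaded
```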

Apart from that, Docker Hub is a good registry with thousands of public images, where you can find images with an OS and other software already installed. So you can get a pretty good head start for your application.

div
  • When you say "built in layers", what does it mean? (A) A copy of the base layers, adapted and committed to a "new" layer, so the base layer is disconnected from the next one? Or (B) the base layer(s) is/are included in the "new" layer and also linked, so changes to the base layer are automatically reflected in the "new" layer? Sorry if the clarification sought is too naive. :( – Kapil Sep 11 '16 at 09:23
  • Docker images are built in layers. In granular terms, all changes up to the point when a layer is committed are present in the layers of the image made until that point. Any changes made after that are added to the next and higher layers. So, a new layer is linked to the base layer. I don't think the same new layer can be added to a different base layer with additional changes. However, if multiple entities want to maintain consistency and have the same base layers, then only the new layers need to be given to these entities to reach the same state. – div Sep 13 '16 at 07:41
  • However, I am not up to date on current Docker development, and there may be changes to the Docker image implementation that are not covered in the above comment. – div Sep 13 '16 at 07:44
  • To be more specific, layers are identified by a signature (SHA-something, I believe), which means that if you change a layer, *it is a different layer.* @Kapil: That means that while its behavior is somewhat closer to your option (B), you actually can't make changes to a base layer (or any layer, for that matter). An image is built out of a list of layers, each applied in order; layers can be cleaned up (and I think they are automatically cleaned up by docker itself) when no longer needed; i.e., when all referencing images have been deleted. – codermonkeyfuel Oct 18 '16 at 08:41
  • @Kapil: Honestly, your question would probably work best as a new question, instead of as a comment on this one, since it's a useful one for people to be able to look up on its own. If you want to ask it as a new question, I'll answer there too. – codermonkeyfuel Oct 18 '16 at 08:43

I'll keep this pithy, since the question is already asked and answered above.

I'd step back, however, and answer it slightly differently: the Docker engine itself adds orchestration as one of its extras, and this is the disruptive part. Once you start running an app as a combination of containers running 'somewhere' across multiple container engines, it gets really exciting: robustness, horizontal scaling, complete abstraction from the underlying hardware... I could go on and on.

It's not just Docker that gives you this; in fact, the de facto container-orchestration standard is Kubernetes, which comes in a lot of flavours: a Docker one, but also OpenShift, SUSE, Azure, AWS...

Then beneath Kubernetes there are alternative container engines; the interesting ones are Docker and CRI-O - recently built, daemonless, intended as a container engine specifically for Kubernetes, but immature. It's the competition between these that I think will determine the real long-term choice of container engine.

Brian Tompsett - 汤莱恩