During my studies, I came across the fact that Docker containers support neither sudo nor systemd services. Not that I need these tools, but I'm just curious about the topic and couldn't find an adequate explanation.
-
`sudo` is a program that you can install with a package manager. Once you install it, you can use it. – jkr Jul 26 '20 at 19:44
-
Of course, but why isn't it included in the official images as it is in their standard distributions? – Onat Girit Jul 26 '20 at 19:47
-
@OnatGirit Because a lot of images use `root` as the default user, so I think `sudo` is optional (to keep the image smaller). – Exploding Kitten Jul 26 '20 at 19:51
-
And consider that since the container itself doesn't have root access on the host and cannot even see other processes in the system, it cannot cause any harm to your system by any means, even if all the processes within the container have root access. So there's no need to limit that already-limited environment with non-root users. And when there is only one user, there's no need to have `sudo`. – Parsa Mousavi Jul 26 '20 at 19:55
-
It's crystal clear right now, thank you all :) – Onat Girit Jul 26 '20 at 20:01
-
@Parsa, your assertions above are only true given a hypothetical bug-free OS kernel. Such a thing doesn't actually exist, which is why shared-kernel containerization provides far worse security than real hardware virtualization. It's still safer to restrict access to root in a container, because root has more access to potential attack surface (to use to try to exploit kernel bugs). – Charles Duffy Jul 26 '20 at 20:12
-
@CharlesDuffy I admit that OS kernels, like other software, have vulnerabilities, but the users in a container are also virtualized like other resources (see [user namespaces](https://www.man7.org/linux/man-pages/man7/user_namespaces.7.html)), so [**root** in the container doesn't mean **root** on the host](https://vsupalov.com/docker-shared-permissions/). So if there's a vulnerability that lets a non-root user insert malicious modules into the kernel, then a virtualized process can do the same, with or without the root user. Otherwise, no. But please tell me if I've gotten the permissions concept wrong. – Parsa Mousavi Jul 26 '20 at 20:27
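A minimal sketch of what that remapping looks like in practice, assuming Docker's opt-in user-namespace remapping is enabled (it is not on by default):

```sh
# Opt in to user-namespace remapping in /etc/docker/daemon.json, then restart the daemon:
#   { "userns-remap": "default" }
docker run --rm alpine id
# Inside the container this prints uid=0(root), but on the host the container's
# processes run under a high subordinate uid taken from /etc/subuid, not real uid 0.
```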
-
@CharlesDuffy However, it's still safer to use **real-hardware** virtualization as you mentioned, because in that scenario a malicious process cannot affect the host processes even when it exploits kernel vulnerabilities. – Parsa Mousavi Jul 26 '20 at 20:36
3 Answers
Docker is aimed at being minimal, since there can be many, many containers running at the same time. The idea is to reduce memory and disk usage. Since containers already run as root to begin with unless otherwise specified, there's no need for `sudo`. Also, since most containers only ever run one process, there's no need for a service manager like `systemd`. Even if they did need to run more than one process, there are smaller programs like `supervisord`.
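As an illustration (a sketch, not taken from any official image), a single-process container can be as small as:

```dockerfile
# One foreground process, no init system, no sudo
FROM python:3.12-slim
CMD ["python", "-m", "http.server", "8080"]
```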

`sudo` is unnecessary in Docker. A container generally runs a single process, and if you intend it to run as not-root, you don't generally want it to be able to become root arbitrarily. In a Dockerfile, you can use `USER` to switch users as many times as you'd like; outside of Docker, you can use `docker run -u root` or `docker exec -u root` to get a root shell no matter how the container is configured.
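A hedged sketch of both sides (the image tag `myapp` and user `appuser` are placeholders, not anything from the question):

```dockerfile
FROM debian:bookworm-slim
RUN useradd --create-home appuser   # unprivileged user, no password set
USER appuser                        # default user for later build steps and at runtime
CMD ["sleep", "infinity"]
```

```sh
docker build -t myapp .
docker run -d --name myapp myapp
docker exec -u root -it myapp sh    # root shell, regardless of the USER baked into the image
```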
Mechanically, `sudo` is bad for non-interactive environments (especially, it's very prone to asking for a user password), and users in Docker aren't usually configured with passwords at all. The most common recipe I see involves `echo plain-text-password | passwd user`, in a file committed to source control, and also easily retrieved via `docker history`; this is not good security practice.
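To see why that leaks, here is an illustrative anti-pattern (using `chpasswd` to stand in for the `passwd` pipe above) and the command that exposes it:

```dockerfile
# Anti-pattern: the plain-text password becomes part of the image's build metadata
RUN echo 'user:plain-text-password' | chpasswd
```

```sh
docker history --no-trunc some-image   # prints every RUN line, password included
```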
`systemd` is unnecessary in Docker. A container generally runs a single process, so you don't need a process manager. Running `systemd` instead of the process you're trying to run also means you don't get anything useful from `docker logs`, can't use Docker restart policies effectively, and generally miss out on the core Docker ecosystem.

`systemd` also runs against the Unix philosophy of "make each program do one thing well". If you look at the set of things listed on the systemd home page, it sets up a ton of stuff; much of that is system-level things that belong to the host (swap, filesystem mounts, kernel parameters) and other things that you can't run in Docker (console `getty` processes). This also means you usually can't run `systemd` in a container without it being `--privileged`, which in turn means it can interfere with this system-level configuration.
There are some good technical reasons to run a dedicated init process in Docker, but a lightweight single-process init like `tini` is a better choice.
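A sketch of that, assuming an Alpine-based image (the Alpine package installs tini at `/sbin/tini`); alternatively, `docker run --init` injects a bundled tini without touching the image:

```dockerfile
FROM alpine:3.18
RUN apk add --no-cache tini
ENTRYPOINT ["/sbin/tini", "--"]   # tini forwards signals and reaps zombie processes
CMD ["my-app"]                    # hypothetical single application process
```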

Besides what @Aplet123 mentioned, consider that since the containers themselves don't have root access on the host and cannot even see other processes in the system (unless created with the `--ipc` option), they cannot cause any harm to your system by any means, even if all the processes within the container have root access. So there's no need to limit that already-limited environment with non-root users. And when there is only one user, there's no need to have `sudo`.
Also, starting and stopping containers as services can be done by Docker itself, so the Docker daemon (which itself has been started via `systemd`) is in fact the master `systemd` for all containers. So there's no need to have `systemd` inside the container either, for example when you want to start your Apache HTTP server.
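For example (a sketch using the official `httpd` image), the daemon's restart policy plays the role a `systemd` unit file otherwise would:

```sh
docker run -d --name web --restart unless-stopped -p 8080:80 httpd:2.4
docker stop web    # roughly analogous to `systemctl stop`
docker start web   # ...and `systemctl start`
```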
