
When trying to stop or restart a docker container I'm getting the following error message:

$ docker restart 5ba0a86f36ea
Error response from daemon: Cannot restart container 5ba0a86f36ea: [2] Container does not exist: container destroyed
Error: failed to restart containers: [5ba0a86f36ea]

But when I run

$ docker logs -f 5ba0a86f36ea

I can see the logs, so obviously the container does exist. Any ideas?

Edit:

sorry, I forgot to mention this:

When I run docker ps -a I see the container as up and running. However, the application inside it is malfunctioning, so I want to restart it, or just get a fresh version of that application online. But since I can't stop and remove the container, I also can't bring up a new instance of the application, which would need to listen on the same port.
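
For reference, the workflow I'm ultimately trying to get working is roughly this (the image name and ports are placeholders):

$ docker stop 5ba0a86f36ea                                  # stop the misbehaving container
$ docker rm 5ba0a86f36ea                                    # remove it so the port is free
$ docker run -d -p <host port>:<container port> <image>    # start a fresh instance on the same port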

peter
  • I would guess you can destroy a container but still have logs about it after it has been destroyed. Otherwise your observation does not make sense. – ikrabbe Jul 12 '15 at 08:22
  • Do you want to run a fresh container with all the data and changes wiped away or do you want to get like important files out of the one you used? – Paul Jul 12 '15 at 08:23
  • This can happen if your docker image does not have proper process handling. – Burhan Khalid Jul 12 '15 at 09:35

15 Answers


I couldn't locate boot2docker on my machine, so I came up with something that worked for me.

$ sudo systemctl restart docker.socket docker.service
$ docker rm -f <container id>

Check if it helps you as well.

Abhishek Kashyap

All of the docker start | restart | stop | rm --force | kill commands may fail if the container is stuck. You can always restart the docker daemon, but if you have other containers running, that may not be an option. What you can do instead is:

ps aux | grep <<container id>> | awk '{print $1, $2}'

The output contains:

<<user>> <<process id>>

Then kill the process associated with the container like so:

sudo kill -9 <<process id from above command>>

That will kill the container and you can start a new container with the right image.
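
A combined sketch of the same idea (the container id is a placeholder; grep -v grep keeps the grep process itself out of the match):

sudo kill -9 "$(ps aux | grep '<<container id>>' | grep -v grep | awk '{print $2}')"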

Teddy Belay

That looks like docker/docker/issues/12738, seen with docker 1.6 or 1.7:

Some containers fail to stop properly, and then fail to restart

We are seeing this issue a lot on our users' hosts after they upgraded from 1.5.0 to 1.6.0.
After the upgrade, some containers cannot be stopped (giving 500 Server Error: Internal Server Error ("Cannot stop container xxxxx: [2] Container does not exist: container destroyed")) or forced destroyed (giving 500 Server Error: Internal Server Error ("Could not kill running container, cannot remove - [2] Container does not exist: container destroyed")). The processes are still running on the host.
Sometimes, it works after restarting the docker daemon.

There are some workarounds:

I've tried all the remote API calls for that unkillable container and here are the results:

  • json, stats, changes, top, logs returned valid responses
  • stop, pause, wait, kill reported 404 (!)

After I finished with remote API, I double-checked docker ps (the container was still there), but then I retried docker kill and it worked! The container got killed and I could remove it.

Or:

What worked was to restart boot2docker on my host. Then docker rm -f

$ boot2docker stop
$ boot2docker start
$ docker rm -f 1f061139ba04
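
A minimal sketch of that first workaround (retrying docker kill until it takes effect; the container id is a placeholder):

for i in 1 2 3 4 5; do
    docker kill <container id> && break   # the kill sometimes succeeds only after a retry
    sleep 2
done
docker rm -f <container id>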
VonC
  • Thx, yes restarting the machine helped. Unfortunately it's a server and shouldn't be restarted too often; hope they'll fix the bug, as I have docker 1.7. – peter Jul 12 '15 at 09:34
  • I agree, this really is a workaround, not a full resolution. I will monitor that bug report. – VonC Jul 12 '15 at 09:36
  • I had an unhealthy container that I could not stop or kill: docker stop -f # helped, thanks! – sergpank Apr 15 '20 at 12:18
  • This is a very good suggestion. For me one channels connection was open to the container and I couldn't stop or kill the container. Then I simply closed the browser which was communicating with the container. Afterwards it was simple to stop and kill the container. – Sardar Faisal May 04 '22 at 09:31

Worth knowing:

If you are running an ENTRYPOINT script ... the script will work with the shebang

#!/bin/bash -x

But it will prevent the container from stopping with

#!/bin/bash -xe
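
More generally, whether a container can stop cleanly depends on how the ENTRYPOINT script hands control to the main process. A minimal sketch (not from the answer above) is to exec the final command so it runs as PID 1 and receives the stop signal directly:

#!/bin/bash -x
# ... setup work here ...
exec "$@"   # replace the shell with the real process so "docker stop" reaches it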
halfer
danday74

This is what worked for me:

sudo aa-remove-unknown

Enjoy.
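
If you want to see which AppArmor profiles are loaded before removing anything, a quick check (assuming the AppArmor userspace tools are installed) is:

sudo aa-status   # lists loaded profiles and the processes confined by them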

FabianoLothor
  • Yes, this has successfully removed the affected container. – Soumyaansh Dec 07 '20 at 18:12
  • One should mention that you may need to reboot your machine after that since this will break a number of other things as well, especially snaps -- e.g., spotify. Just run `sudo aa-remove-unknown -n` for a dry-run first to see what all will be affected. – Christian Fritz May 18 '22 at 23:46

For anyone on a Mac who has Docker Desktop installed: I was able to just click the tray icon and choose Restart Docker. Once it restarted, I was able to delete the containers.

Jerinaw

Check whether there is any zombie (defunct) process using the top command.

docker ps | grep <<container name>>

Get the container id, then find the processes associated with it:

ps -ef | grep <<container id>>
ps -ef | grep defunct | grep java

And kill the container's process by its parent PID.
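
A sketch of that last step (in ps -ef output, the parent PID is the third column):

ps -ef | grep defunct | grep -v grep    # note the PPID in the third column
sudo kill -9 <ppid from the third column>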

ninohead

If you're on a Mac, you can try this via the Terminal: use killall Docker to quit Docker.

Restart it in the Applications folder or with open /Applications/Docker.app.

Subsequently you can run a docker rm <id> for the concerned container.

moritzgvt

I had the same problem on a Windows host machine and none of the other options here worked for me. I ended up just needing to delete the physical container folder, which was located here:

C:\ProgramData\Docker\containers\[container guid]

I had stopped the Docker service first just to be safe, and when I restarted it, the broken containers were gone and I was able to create new ones. I suspect the same will work on a Linux host machine, but I do not know where the container folders are kept on that OS.
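
On a Linux host with the default data root, the container folders typically live under /var/lib/docker/containers, so a sketch of the same approach (the container id is a placeholder) would be:

sudo systemctl stop docker
sudo rm -rf /var/lib/docker/containers/<container id>   # assumes the default data root
sudo systemctl start docker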

metalhead

On Ubuntu, stop the container by using its system process ID. Get the main process ID using:

docker inspect -f '{{.State.Pid}}' container-id

This will return an id such as 25430. Kill it with the command

sudo kill -9 25430
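
The two steps can also be combined into a single command (container-id is a placeholder):

sudo kill -9 "$(docker inspect -f '{{.State.Pid}}' container-id)"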

Sardar Faisal

In my case, I couldn't delete a container created by Nomad jobs; there was no output from docker logs <ContainerID> and, in general, the container looked frozen.

So far the only solution has been sudo service docker restart. Can someone suggest a better one?

Artem Kozlenkov

If you're on Ubuntu, make sure docker-compose isn't installed as a snap. This will cause all kinds of random issues, including the above.

Remove the snap:

sudo snap remove docker-compose

And install manually from the compose repository:

Docker Compose installation instructions
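
As a hedged alternative, if Docker's own apt repository is already configured, the Compose V2 plugin can be installed with:

sudo apt-get update
sudo apt-get install docker-compose-plugin
docker compose version   # verify the plugin is available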

Dmitriy

I forgot that I had made the container start as a system service, so if I stopped or killed the container, the service would bring it back.

If you are using systemctl, you can list all the running services with systemctl | grep running and find the name of the service.

Then use sudo systemctl disable <your_service_name> so it stops bringing the container back.
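
A sketch that both stops the running unit and disables it in one step (the service name is a placeholder):

sudo systemctl disable --now <your_service_name>   # stop the unit now and prevent it from starting again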

Adrian C

Sometimes this is caused by a problem with the docker daemon. I solved it by restarting the docker service. On Linux:

systemctl restart docker
rjdkolb
蔡火胜

In my case, docker rm $(docker ps -aq) worked for me.

Panda
  • ATTENTION: Be careful using this command! It will remove all of your (stopped) docker containers! Unless you have your data stored in a volume, running this command might cause unintended data loss! Furthermore, this doesn't address the OP's question, which is about stopping and restarting a container. – Andru Nov 20 '21 at 14:49