217

Today I deployed an instance of MediaWiki using the appcontainers/mediawiki Docker image, and I now have a new problem for which I cannot find any clue. I tried to attach to the MediaWiki front container using:

docker attach mediawiki_web_1

which prints Terminated on my configuration for a reason I don't know. Trying also:

docker exec -it mediawiki_web_1 bash

I get something close to an error message:

Error response from daemon: Container 81c07e4a69519c785b12ce4512a8ec76a10231ecfb30522e714b0ae53a0c9c68 is restarting, wait until the container is running

And there is my new problem: this container never stops restarting. I can see that using docker ps -a, which always reports a STATUS of Restarting (127) x seconds ago.

The thing is, I am able to stop the container (I tested that), but starting it again just brings it back into its restart loop.

Any idea what the issue could be here? The whole thing was working properly until I tried to attach to it...

I am sad :-(

Balessan
  • I had success by completely deleting my entire Docker cache, using https://forums.docker.com/t/how-to-delete-cache/5753/2 (I also added the -f flag to rmi). Then I rebuilt my containers and they worked. – alberto56 Nov 17 '16 at 21:55
  • For me it wasn't enough to delete containers and images (as described in @alberto56's link), I also had to delete the associated volume. Once I did that, I was back in business. – Katie Byers Aug 18 '19 at 07:14

17 Answers

325

The docker logs command will show you the output a container is generating when you don't run it interactively. This is likely to include the error message.

docker logs --tail 50 --follow --timestamps mediawiki_web_1

You can also run a fresh container in the foreground with docker run -ti <your_wiki_image> to see what it does. You may need to map some config from your docker-compose.yml to the docker command.
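For example, something like this (a sketch only; the port mapping here is an assumption, so substitute the real values from your compose file):

docker run -ti --rm -p 8080:80 appcontainers/mediawiki

Running in the foreground prints the container's startup output straight to your terminal, so a crash shows up immediately.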

I would guess that attaching to the MediaWiki process caused a crash which has corrupted something in your data.

Matt
  • Result of the command you provided, which I guess is getting the last 50 logs related to the container, is the following: `2016-05-26T16:38:27.362409489Z * Stopping web server apache2 * 2016-05-26T21:49:11.376549083Z Terminated 2016-05-26T21:49:11.688655642Z /bin/bash: /tmp/.runconfig.sh: No such file or directory`, so you're right, there's something corrupted in the data, as the runconfig.sh seems to have disappeared. I will try to run the container once more in the foreground as you advised. Just need to find how to specify the 25 proper arguments ^^ – Balessan May 27 '16 at 09:13
  • 13
    Thanks, running a fresh container did the job. Docker was supposed to ease my deployment but for now it is a big failure :-) I probably need to learn and try more... – Balessan May 27 '16 at 10:04
  • I was pulling my hair out trying to get MySQL working. `docker ps -a` showed me that it was stuck in a boot loop and your command showed me why: files already in the mysql directory that it could not delete. You saved me from hours more of pulling my hair out. Thanks! – Blizzardengle Mar 27 '20 at 02:24
  • Thanks. There are actually some requirements imposed by the container itself, and this helped debug what was missing. Great help – justnajm Feb 19 '22 at 14:00
46

When docker kill CONTAINER_ID does not work and docker stop -t 1 CONTAINER_ID also does not work, you can try to delete the container:

docker container rm CONTAINER_ID

I had a similar issue today where containers were in a continuous restart loop.

The issue in my case was related to me being a poor engineer.

Anyway, I fixed the issue by deleting the container, fixing my code, and then rebuilding and running the container.
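Roughly, the sequence was (a sketch; the image and container names are placeholders):

docker container rm CONTAINER_ID
docker build -t my_image .
docker run -d --name my_container my_image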

Hope this helps anyone stuck with this issue in the future.

Bastian Voigt
Giannis Katsini
  • I had put bad code in my application, and in my docker-compose file I added `restart: always`, which left me in a loop of Docker trying to start a broken app.. :( – Giannis Katsini Jan 08 '19 at 11:46
  • Ah, I see you're a poor engineer. I too am a poor engineer and had `exit()` in my code by accident. – jscul Nov 24 '21 at 22:36
13

In my case I removed

restart: always

added

tty: true

and executed the command below to bring it up detached (without a TTY the container's shell exits immediately, and Docker stops the container as soon as its main process ends):

docker-compose up -d
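For reference, the relevant piece of the compose file then looks something like this (the service and image names are made up):

services:
  app:
    image: my_image
    tty: true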
vijayraj34
11

I had this issue because I was in a Docker swarm. Try:

docker swarm leave --force
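If you are unsure whether a swarm service is what keeps recreating the container, you can check first (this assumes you are on a swarm manager node):

docker service ls

Any service listed there will be rescheduled by the swarm no matter how often you stop its containers.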
Miguel Mota
Alex Skotner
10

tl;dr It is restarting with a status code of 127, meaning there is a missing file/library in your container. Starting a fresh container just might fix it.

Explanation:

As far as my understanding of Docker goes, this is what is happening:

  1. Container tries to start up. In the process, it tries to access a file/library which does not exist.
  2. It exits with a status code of 127, which is explained in this answer.
  3. Normally, this is where the container should have completely exited, but it restarts.
  4. It restarts because the restart policy must have been set to something other than no (the default) when the container was started, using either the command-line flag --restart or the docker-compose.yml key restart.

Solution: Something might have corrupted your container. Starting a fresh container should ideally do the job.
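To confirm which restart policy the container was created with, you can inspect it, e.g.:

docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' mediawiki_web_1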

meshde
7

This could also be the case if you have created a systemd service that has:

[Service]
Restart=always
ExecStart=/usr/bin/docker container start -a my_container
ExecStop=/usr/bin/docker container stop -t 2 my_container
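
Such a unit keeps resurrecting the container even after a manual docker stop. A sketch of how to turn it off (the unit name my_container.service is an assumption; use whatever you called yours):

sudo systemctl disable --now my_container.service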
tread
6

From personal experience, it sounds like there is a problem within your Docker container that is not allowing it to start cleanly: some process within the container is causing the restart to hang, or some process is causing the container to crash on start.

When you start the container, make sure you start it detached with -d if you are going to attach to it later (e.g. docker run -d <your_wiki_image>).

iam10k
  • I assume running the container using docker-compose detaches it anyway, no? Or the -d argument is missing in my config file. Will check that. – Balessan May 27 '16 at 09:08
2

In my case the nginx container kept restarting. I checked the nginx container's logs and found that the .crt and .key files for an unneeded domain had errors, so I removed the respective .conf, .crt, and .key files and restarted nginx. That's it: nginx is now working fine without restarting.
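One way to catch such certificate/config errors without waiting for the restart loop is to validate the configuration in a throwaway container (a sketch; the host config path is an assumption):

docker run --rm -v /path/to/nginx/conf:/etc/nginx:ro nginx nginx -t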

Lakshmi
2

I had the same problem for a bit after deploying the code to the prod server, after a long period of running it in dev. The problem was that in my docker-compose.yml file I didn't specify a tag for the mongo image, so by default it pulled the latest. Since I wanted to keep the data path, there was a mismatch between Mongo versions: on dev it was 4.4.3, while prod pulled the latest (I guess 5.x). The solution for me was to specify the image as mongo:4.4.3 instead of just mongo.

I didn't want to go down the path of upgrading the DB.
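For illustration, the pinned image in docker-compose.yml (version number taken from the situation above):

services:
  mongo:
    image: mongo:4.4.3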

Massaynus
1

I had forgotten Minikube was running in the background, and that's what kept restarting them.

nuicca
1

First check the logs to see why the container failed, because your restart policy might keep bringing your container back to a running status. It's better to fix the underlying issue; then you can build a new image with the fix. Afterwards, execute the command below:

docker system prune

https://forums.docker.com/t/docker-registry-in-restarting-1-status-forever/12717/3
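Note that with the default json-file logging driver the log output is kept on disk, so docker logs usually still works while the container is in a restart loop, e.g.:

docker logs --tail 100 CONTAINER_ID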

Nagendran
  • You're absolutely right that it's better to fix the issue than to do what the other answers given here suggest. What would help is to show how one can find the logs, since the container keeps restarting, so the logs are gone. Simply removing the restart parameter won't help either, because the logs will still be gone when the container fails. If you update this with that info, I think you would get more upvotes. – Sevak Avakians Jan 06 '22 at 15:23
1

I deleted all the folders inside the Docker folder and rebuilt all the images again; that worked for me.

 docker-compose up -d --build

and

docker-compose up -d
0

Check the partition where you have installed Docker. In many cases the partition is at 100% capacity, so you may need to look into that.
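A quick way to check (the path assumes Docker's default data root):

df -h /var/lib/docker
docker system df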

0

I just tested this: I removed --restart always and it works for me.
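If you'd rather change the policy on the existing container than recreate it, docker update can do that:

docker update --restart=no CONTAINER_ID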

0

I fixed this on my pi4, without understanding how.

emby/embyserver_arm32v7:latest kept restarting, whether I stopped it, removed and restarted the container, or used :beta.

Then I tried ghcr.io/linuxserver/docker-emby/emby:arm32v7-version-4.6.0.3.

It didn't keep restarting, but it didn't work either.

After stop and rm, I retried with emby/embyserver_arm32v7:latest - now it works.

No idea why.

docker run -d --restart unless-stopped \
  --volume /path/to/programdata:/config \
  --volume /mnt/mydrive:/mnt/share1 \
  --publish 8096:8096 --publish 8920:8920 \
  --env UID=1000 --env GID=100 --env GIDLIST=100 \
  ghcr.io/linuxserver/docker-emby/emby:arm32v7-version-4.6.0.3
0

I got the same issue originally. At first I thought an error had happened while running the container, but nothing was wrong. Finally I figured it out: the container simply exits on its own once its routine is done. So... just add a simple command as the last line of entrypoint.sh:

tail -f /dev/null

and the container will keep running after it starts. Good luck! :)
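A minimal entrypoint.sh along those lines (the setup step is a placeholder):

#!/bin/bash
# one-shot work the container was built to do
echo "setup done"
# keep a foreground process alive so the container does not exit
tail -f /dev/null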

firestoke
-2

Try running (note the &&, so the stop completes before the remove):

docker stop CONTAINER_ID && docker rm -v CONTAINER_ID

Thanks