23

Suddenly, when I deployed some new containers with docker-compose, internal hostname resolution didn't work.
When I tried to ping one container from another using the service name from the docker-compose.yaml file, I got ping: bad address 'myhostname'. I checked that /etc/resolv.conf was correct, and it was using 127.0.0.11. When I tried to manually resolve the hostname with either nslookup myhostname. or nslookup myhostname.docker.internal, I got an error:

nslookup: write to '127.0.0.11': Connection refused
;; connection timed out; no servers could be reached

Okay, so the issue is that the Docker DNS server has stopped working. All containers that were already running still function, but any newly started ones have this issue. I am running Docker version 19.03.6-ce, build 369ce74.

I could of course just restart Docker to see if that solves it, but I am also keen on understanding why this issue happened and how to avoid it in the future.
I have a lot of containers started on the server and a total of 25 Docker networks currently. Any ideas on what can be done to troubleshoot? Any known issues that could explain this? The docker-compose.yaml file I use has worked before, and no changes have been made to it.

Edit: No DNS names at all can be resolved. 127.0.0.11 refuses all connections. I can ping any external IP address, as well as the IPs of other containers on the same Docker network. It is only the 127.0.0.11 DNS server that is not working; 127.0.0.11 still replies to ping from within the container.
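
In short, from inside one of the affected containers (myhostname being a service name from my compose file):

ping -c 1 myhostname              # -> ping: bad address 'myhostname'
cat /etc/resolv.conf              # -> nameserver 127.0.0.11
nslookup myhostname 127.0.0.11    # -> connection refused / timed out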

Johnathan
5 Answers

16

I have the same problem. I am using the pihole/pihole Docker container as the sole DNS server on my network. Docker containers on the same host as the Pi-hole server could not resolve domain names.

I resolved the issue based on hmario's response to this forum post.

In brief, modify the pihole docker-compose.yml from:

---
version: '3.7'
services:
  unbound:
    image: mvance/unbound-rpi:1.13.0
    hostname: unbound
    restart: unless-stopped
    ports:
      - 53:53/udp
      - 53:53/tcp
    volumes: [...]

to

---
version: '3.7'
services:
  unbound:
    image: mvance/unbound-rpi:1.13.0
    hostname: unbound
    restart: unless-stopped
    ports:
      - 192.168.1.30:53:53/udp
      - 192.168.1.30:53:53/tcp
    volumes: [...]

Here 192.168.1.30 is the IP address of the Docker host.
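
As a quick check that the change took effect, the published port should now be bound to the host's LAN address only (service name and IP as in the example above):

docker-compose up -d
docker-compose port --protocol=udp unbound 53   # expect 192.168.1.30:53, not 0.0.0.0:53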

citizentwelve
  • Finally a solution that works! Any time I try to run AdGuard Home & Unbound on a server and point my router's DNS to it, sibling containers on the same server can't ping the outside world. Been looking for a solution for ages, thank you! – djscrew Jul 10 '22 at 17:49
  • This should be written somewhere in the docs. Thanks, man. – Anan Raddad Aug 12 '22 at 04:30
  • The same problem also occurs with Technitium DNS as the DNS server running inside a Docker container, and your solution works there too! I searched quite a long time for a solution, considering it is a super simple workaround/fix. Thank you very much for your answer. – 123 Aug 15 '23 at 13:42
9

Make sure you're using a custom bridge network, NOT the default one. As per the Docker docs (https://docs.docker.com/network/bridge/), the default bridge network does not allow automatic DNS resolution:

Containers on the default bridge network can only access each other by IP addresses, unless you use the --link option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.
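
A minimal sketch of the difference (container and network names here are illustrative):

# default bridge: resolving a sibling container by name fails
docker run -d --name web nginx
docker run --rm busybox ping -c 1 web            # ping: bad address 'web'

# user-defined bridge: the embedded DNS at 127.0.0.11 resolves names
docker network create mynet
docker run -d --name web2 --network mynet nginx
docker run --rm --network mynet busybox ping -c 1 web2   # works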

Daniel Lo Nigro
  • This was my issue on a Traefik setup that had an external network for some servers and a default network for all containers in the swarm. I changed the default network to a non-default name and they could all resolve each other via DNS. Previously it was only those on the traefik network. – MrYutz Jul 30 '21 at 16:44
7

I'm having exactly the same problem. Following the comment here, I could reproduce the setup without docker-compose, using only docker:

docker network create alpine_net
docker run -it --network alpine_net  alpine /bin/sh -c "cat /etc/resolv.conf; ping -c 4 www.google.com"

Stopping Docker (systemctl stop docker) and starting the daemon manually with debug output enabled gives:

> dockerd --debug 
[...]
 [resolver] read from DNS server failed, read udp 172.19.0.2:40868->192.168.177.1:53: i/o timeout 
[...]

where 192.168.177.1 is the local network IP of the host that Docker runs on, and which is also where Pi-hole runs as the DNS server, working for all of my systems.

I played around with fixing the iptables configuration, but even switching it off completely and opening everything up did not help.

The solution I found, without fully understanding the root cause, was to move the DNS to another server. I installed dnsmasq on a second system with IP 192.168.177.2 that does nothing but forward all DNS queries back to my Pi-hole server on 192.168.177.1.
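
A minimal forwarder configuration for that second system could look like this (a sketch; these are standard dnsmasq options):

# /etc/dnsmasq.conf on 192.168.177.2
# don't read upstream servers from /etc/resolv.conf
no-resolv
# forward all queries to the Pi-hole
server=192.168.177.1
# answer only on this address
listen-address=192.168.177.2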

Starting Docker on 192.168.177.1 again, with DNS configured to use 192.168.177.2, everything worked.

With this in one terminal:

dockerd --debug --dns 192.168.177.2

and the command from above in another, name resolution worked again:

> docker run -it --network alpine_net  alpine /bin/sh -c "cat /etc/resolv.conf; ping -c 4 www.google.com"
search mydomain.local
nameserver 127.0.0.11
options ndots:0
PING www.google.com (172.217.23.4): 56 data bytes
64 bytes from 172.217.23.4: seq=0 ttl=118 time=8.201 ms

--- www.google.com ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 8.201/8.201/8.201 ms

So moving the DNS server to another host and adding "dns": ["192.168.177.2"] to my /etc/docker/daemon.json fixed it for me.
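
For reference, the resulting /etc/docker/daemon.json would look something like this (the IP is the dnsmasq forwarder from above):

{
  "dns": ["192.168.177.2"]
}

followed by a systemctl restart docker to apply it.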

Maybe someone else can help explain the root cause of the problem with running the DNS server on the same host as Docker.

CKolumbus
1

I had the same problem; in my case the cause was the host machine's hostname. I checked the hostnamectl result and it was OK, but the problem was solved by a simple reboot. Before the reboot, the result of cat /etc/hosts was like this:

# The following lines are desirable for IPv4 capable hosts
127.0.0.1 localhost HostnameSetupByISP
127.0.0.1 localhost.localdomain localhost
127.0.0.1 localhost4.localdomain4 localhost4

# The following lines are desirable for IPv6 capable hosts
::1 localhost HostnameSetupByISP
::1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6

and after the reboot, I got this result:

# The following lines are desirable for IPv4 capable hosts
127.0.0.1 hostnameIHaveSetuped  HostnameSetupByISP
127.0.0.1 localhost.localdomain localhost
127.0.0.1 localhost4.localdomain4 localhost4

# The following lines are desirable for IPv6 capable hosts
::1 hostnameIHaveSetuped HostnameSetupByISP
::1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
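
To inspect the relevant pieces, something along these lines works (read-only checks):

hostnamectl          # configured static hostname
hostname             # kernel hostname currently in effect
cat /etc/hosts       # the hostname should appear on the 127.0.0.1 line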

Bheid
  • A simple docker container restart, and Gravity can now update just fine, as DNS internal to the docker container is working again. Thank you @Bheid – Iskren P. Feb 25 '23 at 23:54
1

First, make sure your container is connected to a custom bridge network. I suppose that, by default, in a custom network DNS requests inside the container are sent to 127.0.0.11#53 and forwarded to the DNS server of the host machine.

Second, check iptables -L to see if there are Docker-related rules. If there are none, that's probably because iptables was restarted/reset. You'll need to restart the Docker daemon to re-add the rules and make DNS request forwarding work.
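
A quick way to check, for reference (the chain names are the ones Docker normally creates):

sudo iptables -L | grep -i docker    # expect DOCKER and DOCKER-USER chains
sudo iptables -t nat -L DOCKER -n    # NAT rules for published ports
sudo systemctl restart docker        # recreates the rules if they were flushed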

Yuefeng Li