272

I've recently been playing around with Docker and QGIS and have installed a container following the instructions in this tutorial.

Everything works great, although I am unable to connect to a localhost postgres database that contains all my GIS data. I figure this is because my postgres database is not configured to accept remote connections and have been editing the postgres conf files to allow remote connections using the instructions in this article.

I'm still getting an error message when I try to connect to my database from QGIS running in Docker:

could not connect to server: Connection refused
    Is the server running on host "localhost" (::1) and accepting TCP/IP connections on port 5433?

The postgres server is running, and I've edited my pg_hba.conf file to allow connections from a range of IP addresses (172.17.0.0/32). I had previously queried the IP address of the docker container using docker ps, and although the IP address changes, it has so far always been in the range 172.17.0.x

Any ideas why I can't connect to this database? Probably something very simple I imagine!

I'm running Ubuntu 14.04; Postgres 9.3

Cepr0
  • 28,144
  • 8
  • 75
  • 101
marty_c
  • 5,779
  • 5
  • 24
  • 27

15 Answers

246

TL;DR

  1. Use 172.17.0.0/16 as IP address range, not 172.17.0.0/32.
  2. Don't use localhost to connect to the PostgreSQL database on your host, but the host's IP instead. To keep the container portable, start the container with the --add-host=database:<host-ip> flag and use database as hostname for connecting to PostgreSQL.
  3. Make sure PostgreSQL is configured to listen for connections on all IP addresses, not just on localhost. Look for the setting listen_addresses in PostgreSQL's configuration file, typically found in /etc/postgresql/9.3/main/postgresql.conf (credits to @DazmoNorton).

Long version

172.17.0.0/32 is not a range of IP addresses, but a single address (namely 172.17.0.0). No Docker container will ever get that address assigned, because it's the network address of the Docker bridge (docker0) interface.

When Docker starts, it creates a new bridge network interface that you can easily see by calling ip a:

$ ip a
...
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
    inet 172.17.42.1/16 scope global docker0
       valid_lft forever preferred_lft forever

As you can see, in my case, the docker0 interface has the IP address 172.17.42.1 with a netmask of /16 (or 255.255.0.0). This means that the network address is 172.17.0.0/16.

The IP address is randomly assigned, but without any additional configuration it will always be in the 172.17.0.0/16 network. Each Docker container gets a random address from that range.
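Note that docker ps does not show container IP addresses; to confirm which address a particular container actually received, docker inspect can print it (the container name below is a placeholder):

docker inspect -f '{{ .NetworkSettings.IPAddress }}' my_qgis_container
# prints something like 172.17.0.2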

This means that if you want to grant access from all possible containers to your database, use 172.17.0.0/16.
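Putting the three TL;DR points together, a minimal sketch of the whole setup (paths assume the default Ubuntu/PostgreSQL 9.3 packages; the host IP 192.168.0.10 and the image name qgis-image are placeholders):

# /etc/postgresql/9.3/main/postgresql.conf
#   listen_addresses = '*'
# /etc/postgresql/9.3/main/pg_hba.conf
#   host    all    all    172.17.0.0/16    md5

sudo service postgresql restart

# make the host reachable as "database" from inside the container
docker run --add-host=database:192.168.0.10 qgis-image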

Adam
  • 5
  • 1
  • 3
helmbert
  • 35,797
  • 13
  • 82
  • 95
  • 1
    hey thanks for your comments. I've changed my `pg_hba.conf` to the address you suggested, but still get the same connection error message after stopping and restarting the postgres service. I've added the line under my ipv4 connections - is there somewhere else I'm supposed to add the address you suggest? Alternatively in my QGIS app running in Docker do I need to change the postgres connection info? For example, if I'm connecting from within a docker container is the host still 'localhost'? – marty_c Jul 07 '15 at 09:00
  • Ah, that's an important point. No, `localhost` is not the host system inside your Docker container. Try connecting to the host system's public IP address. To keep the container portable, you can also start the container with the `--add-host=database:` and simply use `database` as hostname to connect to your PostgreSQL host from within the Docker container. – helmbert Jul 07 '15 at 09:22
  • 9
    I needed one more piece. I also had to edit `/etc/postgresql/9.3/main/postgresql.conf` and add my server's `eth0` IP address to `listen_addresses`. By default `listen_addresses` has postgres bind to `localhost` only. – Dzamo Norton Nov 07 '15 at 10:32
  • @DzamoNorton, thanks for the hint! I updated my answer accordingly. – helmbert Nov 07 '15 at 12:19
  • @helmbert `host-ip` is ip address of virtual machine or docker container? – Mr.D Jul 28 '17 at 18:47
  • @Mr.D with "host", I mean the machine (either virtual or physical) that the container is run on, not the container itself. – helmbert Jul 28 '17 at 19:49
  • The entire 'Long version' of this answer doesn't make sense. Fundamentally, `172.17.0.0/32` **is** a range of IP addresses - that's what the `/32` is - CIDR notation of an address range! While the 'TL;DR' might solve people's problems, it does so without any real understanding of IP Address ranges. – QA Collective Sep 02 '20 at 07:03
  • I made a stupid mistake that others may also encounter. I was trying to run postgres container with `--add-host:database:<172.17.0.1>`, while I should've started the other container (e.g. node server trying to connect to postgres) with `add-host` flag. – Hossein Dehnokhalaji Dec 06 '20 at 23:51
  • Point no 3 is very important. I took 2 days to resolve it and finally with this point my problem got resolved. Thanks... – Hitesh P Jan 12 '21 at 15:12
  • For people who see docker spawning containers in different subnets (`172.17`, `172.20`, etc. Put this in your `pg_hba.conf`: `host all all 172.0.0.0/8 md5` – GerardJP Nov 26 '21 at 11:47
  • To get this working I also had to configure the firewall to allow connections from up range, as connections are from another machine – Craig Webster Aug 19 '22 at 09:54
  • `172.17.0.0/16` didn't work for me, i had to do `172.17.0.%` instead i suppose the equivalent to `/16` would be `172.17.%.%` – Fuseteam Feb 07 '23 at 16:35
175

Simple Solution

The newest version of Docker (18.03) offers a built-in port forwarding solution. Inside your docker container, simply set the db host to host.docker.internal. This will be forwarded to the host the docker container is running on.

Documentation for this is here: https://docs.docker.com/docker-for-mac/networking/#i-want-to-connect-from-a-container-to-a-service-on-the-host
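As a quick way to verify this from a throwaway container, here is a sketch assuming the official postgres image is available (for its psql client) and the host database accepts password authentication; the image tag, user and database name are placeholders:

docker run --rm -it postgres:15 \
  psql "host=host.docker.internal port=5432 user=postgres dbname=gis"

If that prompts for a password and connects, the same host.docker.internal hostname will work from your application container.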

Chris
  • 1,956
  • 1
  • 10
  • 12
  • 15
    This is by far the best answer now! way easier and how it should be. – bjm88 Jul 07 '18 at 04:58
  • 7
    is `host.docker.internal` limited to macs only? – Dragas Nov 12 '18 at 11:22
  • @Dragas according to the docs it will "not work outside of the Mac", but the same DNS name is mentioned in the docs for Docker for Windows, so I think it's limited to "Docker for...". Either way it's development-only: you're not meant to ship a docker image that uses this. – Rhubarb Mar 02 '20 at 17:52
  • 17
    Only working with windows and mac. On my case ubuntu is not working – Azri Zakaria Apr 27 '20 at 04:59
  • This works for WSL2 Ubuntu. My specific case: WSL2 Docker container (Flask app) accessing a WSL2 installed postgresql. – WildGoose Feb 16 '22 at 03:26
  • 3
    The docs say " This is for development purpose and does not work in a production environment outside of Docker Desktop.". I will be connecting to a remote database in my production setup as well. What should I do in that case? – Mazhar Ali Sep 06 '22 at 18:28
  • As @MazharAli pointed out, this is slightly mislead as it only works locally/in development with docker desktop. In production, you should use the service/container name as the hostname. Read this for more: https://docs.docker.com/compose/networking/ – mattyb Jul 13 '23 at 20:59
76

Docker for Mac solution

17.06 onwards

Thanks to @Birchlabs' comment, it is now much easier with this special Mac-only DNS name:

docker run -e DB_PORT=5432 -e DB_HOST=docker.for.mac.host.internal

From 17.12.0-ce-mac46, docker.for.mac.host.internal should be used instead of docker.for.mac.localhost. See the release notes for details.

Older version

@helmbert's answer explains the issue well. But Docker for Mac does not expose the bridge network, so I had to use this trick to work around the limitation:

$ sudo ifconfig lo0 alias 10.200.10.1/24

Open /usr/local/var/postgres/pg_hba.conf and add this line:

host    all             all             10.200.10.1/24            trust

Open /usr/local/var/postgres/postgresql.conf and change listen_addresses:

listen_addresses = '*'

Reload service and launch your container:

$ PGDATA=/usr/local/var/postgres pg_ctl reload
$ docker run -e DB_PORT=5432 -e DB_HOST=10.200.10.1 my_app 

What this workaround does is basically the same as @helmbert's answer, but it uses an IP address attached to lo0 instead of the docker0 network interface.

baxang
  • 3,627
  • 1
  • 29
  • 27
  • 3
    Is this still current as of 4 April 2017? – Petrus Theron Apr 04 '17 at 09:37
  • I like this way which will not expose database. BTW, can I use this on CentOS? I got the error: alias: Unknown host when I tried to use the alias command you provide. – Tsung Wu Jun 14 '17 at 17:01
  • 7
    There is a better way on macOS, as of Docker 17.06.0-rc1-ce-mac13 (June 1st 2017). containers recognise the host `docker.for.mac.localhost`. this is the IP of your host machine. lookup its entry in the container's hosts database like so: `docker run alpine /bin/sh -c 'getent hosts docker.for.mac.localhost'` – Birchlabs Jul 28 '17 at 17:18
  • 7
    Seems like it changed to `host.docker.internal` since 18.03, other options are still available but deprecated ([Source](https://docs.docker.com/docker-for-mac/release-notes/#docker-community-edition-18030-ce-mac59-2018-03-26)). – gseva Apr 26 '18 at 20:12
52

Simple solution

Just add --network=host to docker run. That's all!

This way the container will use the host's network, so localhost and 127.0.0.1 will point to the host (by default they point to the container). Example:

docker run -d --network=host \
  -e "DB_DBNAME=your_db" \
  -e "DB_PORT=5432" \
  -e "DB_USER=your_db_user" \
  -e "DB_PASS=your_db_password" \
  -e "DB_HOST=127.0.0.1" \
  --name foobar foo/bar
Max Malysh
  • 29,384
  • 19
  • 111
  • 115
28

The solutions posted here did not work for me, so I am posting this answer to help anyone facing a similar issue.

Note: This solution works for Windows 10 as well, please check comment below.

OS: Ubuntu 18
PostgreSQL: 9.5 (Hosted on Ubuntu)
Docker: Server Application (which connects to PostgreSQL)

I am using docker-compose.yml to build the application.

STEP 1: Add host.docker.internal:<docker0 IP> as an extra host

version: '3'
services:
  bank-server:
    ...
    depends_on:
      ....
    restart: on-failure
    ports:
      - 9090:9090
    extra_hosts:
      - "host.docker.internal:172.17.0.1"

To find the IP of the docker0 bridge, i.e. 172.17.0.1 in my case, you can use:

$> ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255

OR

$> ip a
1: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

STEP 2: In postgresql.conf, change listen_addresses to listen_addresses = '*'

STEP 3: In pg_hba.conf, add this line

host    all             all             0.0.0.0/0               md5

STEP 4: Now restart the postgresql service using sudo service postgresql restart

STEP 5: Use the host.docker.internal hostname to connect to the database from the server application.
Ex: jdbc:postgresql://host.docker.internal:5432/bankDB
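Before wiring this into the application, a quick sanity check from the running container can confirm both the host mapping and database connectivity. This assumes the container is named bank-server as above and that its image ships getent and psql; the credentials are placeholders:

docker exec -it bank-server getent hosts host.docker.internal   # should print 172.17.0.1
docker exec -it bank-server \
  psql "host=host.docker.internal port=5432 user=postgres dbname=bankDB" -c 'SELECT 1;'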

Enjoy!!

singhpradeep
  • 1,593
  • 1
  • 12
  • 11
  • 2
    `sudo nano /etc/postgresql//main/postgresql.conf` for those who want to edit postgres conf – Jasurbek Nabijonov Mar 04 '21 at 10:17
  • I have validated these steps in Windows 10 along with Docker. This is working well. Only things to note: use ipconfig in Step 1 and in Step 4, to restart postgresql services goto Control Panel -> Administrative Tools -> View Local Service and then search for Postgresql service and restart them. – singhpradeep May 01 '21 at 11:56
  • 1
    Thank you, step 2 and 3 solved my problem. In my case, I set the `method` for the new `pg_hba` entry to `password`. Also, you don't have to open access to all IP (`0.0.0.0/0`), but can instead use the docker bridge IP (use `ip -h -c a` to find that IP), in my case it was `172.17.0.1/16` – Leonard AB Jun 21 '21 at 14:57
  • If you followed this method with Ubuntu and get `could not translate host name "host.docker.internal" to address`, then just change the host to the local Docker IP `172.17.0.1` and it should work. – Florent Sep 28 '22 at 07:10
11

You can pass --network=host to the docker run command to access localhost inside the container.

Ex:

docker run --network=host docker-image-name:latest

In case you want to pass env variables that reference localhost, use the --env-file parameter to make those environment variables available inside the container.

Ex:

docker run --network=host --env-file .env-file-name docker-image-name:latest

Note: pass the parameters before the docker image name, otherwise they will not work. (I ran into this, so heads up!)
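For reference, the file passed to --env-file is just plain KEY=value lines (lines starting with # are ignored); a hypothetical example:

# .env-file-name (hypothetical contents; with --network=host, localhost refers to the host)
DB_HOST=127.0.0.1
DB_PORT=5432
DB_USER=your_db_user
DB_PASS=your_db_password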

Shubham
  • 1,740
  • 1
  • 15
  • 17
7

For docker-compose you can try just adding

network_mode: "host"

example :

version: '2'
services:
  feedx:
    build: web
    ports:
    - "127.0.0.1:8000:8000"
    network_mode: "host"

https://docs.docker.com/compose/compose-file/#network_mode

Sarath Ak
  • 7,903
  • 2
  • 47
  • 48
6

To set up something simple that allows a PostgreSQL connection from the docker container to my localhost, I used this in postgresql.conf:

listen_addresses = '*'

And added this to pg_hba.conf:

host    all             all             172.17.0.0/16           password

Then do a restart. My client in the docker container (which was at 172.17.0.2) could then connect to PostgreSQL running on my localhost using the host, database, username and password.
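For the restart itself, on an Ubuntu-style packaged install something like the following should do; if only pg_hba.conf changed, a reload is enough (commands are assumptions for that kind of setup):

sudo systemctl restart postgresql
# or, if only pg_hba.conf was edited:
sudo -u postgres psql -c "SELECT pg_reload_conf();"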

Harlin
  • 1,059
  • 14
  • 18
5

Let me try to explain what I did.

Postgresql

First of all, I did the configuration needed to make sure my Postgres database was accepting connections from outside.

Open pg_hba.conf and add the following line at the end:

host    all             all             0.0.0.0/0               md5

Open postgresql.conf, look for listen_addresses and modify it like this:

listen_addresses = '*'

Make sure the line above is not commented out with a #.

-> Restart your database

Note: this is not the recommended configuration for a production environment.

Next, I looked for my host's IP. I had been using the localhost IP 127.0.0.1, but the container doesn't see it as the host, so the Connection Refused message in question shows up when running the container. After a long search on the web, I read that the container can reach the host through its local network IP (the one your router assigns to every device that connects to it; I'm not talking about the IP that gives you access to the internet). That said, I opened a terminal and did the following:

Look for local network ip

Open a terminal or CMD

(MacOS/Linux)

$ ifconfig

(Windows)

$ ipconfig

This command will show your network configuration information. And looks like this:

en4: 
    ether d0:37:45:da:1b:6e 
    inet6 fe80::188d:ddbe:9796:8411%en4 prefixlen 64 secured scopeid 0x7 
    inet 192.168.0.103 netmask 0xffffff00 broadcast 192.168.0.255
    nd6 options=201<PERFORMNUD,DAD>
    media: autoselect (100baseTX <full-duplex>)
    status: active

Look for the one that is active.

In my case, my local network ip was 192.168.0.103

With this done, I ran the container.

Docker

Run the container with the --add-host parameter, like this:

$ docker run --add-host=aNameForYourDataBaseHost:yourLocalNetWorkIp --name containerName -di -p HostsportToBind:containerPort imageNameOrId

In my case I did:

$ docker run --add-host=db:192.168.0.103 --name myCon -di -p 8000:8000 myImage

I’m using Django, so the 8000 port is the default.

Django Application

The configuration to access the database was:

In settings.py

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'myDataBaseName',
        'USER': 'username',
        'PASSWORD': '123456',
        'HOST': '192.168.0.103',
        'PORT': 5432,
    }
}

References

About -p flag: Connect using network port mapping

About docker run: Docker run documentation

Interesting article: Docker Tip #35: Connect to a Database Running on Your Docker Host

Dharman
  • 30,962
  • 25
  • 85
  • 135
Igr Pn
  • 131
  • 2
  • 3
4

3 STEPS SOLUTION

1. Update docker-compose file

First, since the database runs locally on the host, we need to make the host reachable from the container by adding the following configuration to the container service:

services:
    ...
    my-web-app:
        ...
        extra_hosts:
          -  "host.docker.internal:host-gateway"
        ...
    ...

2. Update /etc/postgresql/12/main/pg_hba.conf

If the file doesn't exist in this directory, use find / -name 'pg_hba.conf' to locate it.

Add the following line under the comment # IPv4 local connections:

host    all             all             172.17.0.1/16           md5

3. Update /etc/postgresql/12/main/postgresql.conf

If the file doesn't exist in this directory, use find / -name 'postgresql.conf' to locate it.

Find the following line (it might be commented out):

#listen_addresses = 'localhost'

And change it to the following to be able to connect to postgres from both localhost and 172.17.0.1:

listen_addresses = 'localhost,172.17.0.1'

You can also change it to the following, but this is not recommended for production since it exposes the database to the world (meaning any IP address can connect to the database):

listen_addresses = '*'

Finally, don't forget to:

  1. Restart the postgres service using sudo systemctl restart postgresql
  2. Update the connection string host to host.docker.internal (see the example below)
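For example, a hypothetical libpq-style connection string after that change (user, password and database name are placeholders):

postgresql://app_user:app_password@host.docker.internal:5432/app_db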
Zoe
  • 27,060
  • 21
  • 118
  • 148
Escapola
  • 117
  • 1
  • 5
3

In Ubuntu:

First you have to check whether the Docker database port is open on your system, using the following command:

sudo iptables -L -n

Sample OUTPUT:

Chain DOCKER (1 references)
target     prot opt source               destination         
ACCEPT     tcp  --  0.0.0.0/0            172.17.0.2           tcp dpt:3306
ACCEPT     tcp  --  0.0.0.0/0            172.17.0.3           tcp dpt:80
ACCEPT     tcp  --  0.0.0.0/0            172.17.0.3           tcp dpt:22

Here 3306 is used as the Docker database port on IP 172.17.0.2. If this port is not open, run the following command:

sudo iptables -A INPUT -p tcp --dport 3306 -j ACCEPT

Now you can easily access the Docker database from your local system with the following configuration:

  host: 172.17.0.2 
  adapter: mysql
  database: DATABASE_NAME
  port: 3307
  username: DATABASE_USER
  password: DATABASE_PASSWORD
  encoding: utf8

In CentOS:

First you have to check whether the Docker database port is open in your firewall, using the following command:

sudo firewall-cmd --list-all

Sample OUTPUT:

  target: default
  icmp-block-inversion: no
  interfaces: eno79841677
  sources: 
  services: dhcpv6-client ssh
  ports: 3307/tcp
  protocols: 
  masquerade: no
  forward-ports: 
  sourceports: 
  icmp-blocks: 
  rich rules:

Here 3307 is used as the Docker database port on IP 172.17.0.2. If this port is not listed, run the following command:

sudo firewall-cmd --zone=public --add-port=3307/tcp

On a server, you can add the port permanently:

sudo firewall-cmd --permanent --add-port=3307/tcp
sudo firewall-cmd --reload

Now you can easily access the Docker database from your local system with the above configuration.

Sanaulla
  • 1,329
  • 14
  • 13
  • 3
    I know this is old but *Now, You can easily access the Docker Database from your local system by following configuration* - you have this the wrong way around. He has a local database, and a docker app trying to connect to that local db not the other way around – Craicerjack Mar 26 '20 at 11:27
3

You can add multiple listening addresses for better security.

listen_addresses = 'localhost,172.17.0.1'

Setting listen_addresses = '*' isn't a good option; it is very dangerous and exposes your PostgreSQL database to the wild west.

c9s
  • 1,888
  • 19
  • 15
2

In case the above solutions don't work for anyone: use the statement below to connect from Docker to the host's postgres (on Mac):

psql --host docker.for.mac.host.internal -U postgres
Abdul Mannan
  • 1,072
  • 12
  • 19
  • I had to add `RUN apt-get install -y postgresql-client` to my Dockerfile and `psql --host host.docker.internal` worked for me (also on a mac) – Adrian Guerrero Apr 11 '23 at 19:10
0

One more thing needed for my setup was to add

172.17.0.1  localhost

to /etc/hosts

so that Docker would point to 172.17.0.1 as the DB hostname, rather than relying on a changing outer IP to find the DB. Hope this helps someone else with this issue!

goto
  • 7,908
  • 10
  • 48
  • 58
Mikko P
  • 468
  • 6
  • 7
  • 20
    This is a bad solution. Localhost should typically point to 127.0.0.1. Changing it might have undesired consequences, even if in this particular case it works. – Alex Oct 22 '17 at 02:17
  • 2
    A better way is to set up a `database` host with `--add-host=database:172.17.0.1` when running the container. Then point your app to that host. This avoids hard-coding an IP address inside a container. – jaredsk Dec 01 '17 at 16:59
  • 2
    the `--add-host=database:172.17.0.1` is preferable – Luis Martins Apr 19 '18 at 19:50
-4

Another solution is a service volume. You can define a service volume and mount the host's PostgreSQL data directory into it; check the compose file below for details.

version: '2'
services:
  db:   
    image: postgres:9.6.1
    volumes:
      - "/var/lib/postgresql/data:/var/lib/postgresql/data" 
    ports:
      - "5432:5432"

By doing this, another PostgreSQL service will run inside a container, but it uses the same data directory that the host's PostgreSQL service is using.

Hrishi
  • 446
  • 3
  • 10