
  ..., line 187, in get_new_connection
    connection = Database.connect(**conn_params)
  File "/usr/local/lib/python3.9/site-packages/psycopg2/__init__.py", line 122, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not translate host name "db" to address: Temporary failure in name resolution

On my local machine everything works perfectly.

My Dockerfile:

FROM python:3.9-slim-buster

# RUN apt-get update && apt-get install -y libpq-dev \
#      gcc \
#      postgresql-client


# set work directory
WORKDIR /opt/app

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt /opt/app/requirements.txt
RUN chmod +x /opt/app/requirements.txt
RUN pip install -r requirements.txt

# copy project
COPY . /opt/app/
RUN chmod +x /opt/app/docker-entrypoint.sh
EXPOSE 8000
ENTRYPOINT [ "/opt/app/docker-entrypoint.sh" ]

Here is my docker-compose.yml:

version: '3.9'
services:
  db:
    image: postgres
    restart: always
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/

  app:
    restart: always
    build:
      context: .
      dockerfile: Dockerfile
    command: python manage.py runserver 0.0.0.0:8000
    container_name: myproj
    volumes:
      - ./app/:/usr/src/app/
    ports:
      - "8000:8000"
    depends_on:
      - db

volumes:
  postgres_data:
    driver: local

My entrypoint:

echo "Apply database migrations"
python manage.py makemigrations
python manage.py migrate
echo "Starting server"
python manage.py runserver 0.0.0.0:8000

exec "$@"

My database settings:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'postgres',
        'PASSWORD': 'postgres',
        'HOST': 'db',
        'PORT': '5432',
    }
}

What I have tried:

  1. Pushing the db container to Cloud Run separately, but it fails to start:

Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable

I have set the port to 5432 and also tried running it on port 80; it still fails to come online.

  2. My app container fails to start because it cannot connect to the db container:

could not translate host name db to address: Temporary failure in name resolution
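
A quick way to check whether the app container can resolve the db hostname at all is a short Python snippet run inside the container (a minimal sketch; check_dns.py is a hypothetical helper name, and db is the service name from the Compose file above):

# Run inside the app container, e.g. docker compose exec app python check_dns.py
import socket

try:
    # This resolves only when both containers share a Docker network
    # that provides service-name DNS, as Compose does locally.
    print("db ->", socket.gethostbyname("db"))
except socket.gaierror as exc:
    print("no DNS entry for 'db':", exc)

On Cloud Run there is no such shared network, so the lookup fails the same way the traceback above does.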

griffins
  • Have you read this answer: https://stackoverflow.com/questions/51750715/could-not-translate-host-name-db-to-address-using-postgres-docker-compose-and ? In your case I guess you have to start your Python app in the `CMD` instead of the entrypoint. – Robert-Jan Kuyper Nov 29 '21 at 18:33
  • tried it, same error @Robert-JanKuyper – griffins Nov 29 '21 at 19:29
  • You can try asking the question at serverfault.com, which is a better platform for such questions. – Robert-Jan Kuyper Nov 29 '21 at 20:25
  • You have multiple problems. 1) Cloud Run (managed) does not provide container name resolution (at this time). When running locally, Docker can connect the container networks together; this is not possible with managed Cloud Run. 2) Managed Cloud Run does not support a container database: storage is not persistent, and the service must expose an HTTP server interface. Review this link for details on what you can run: https://cloud.google.com/run/docs/reference/container-contract – John Hanley Nov 29 '21 at 21:12

2 Answers


Building on John Hanley's comment about container databases: it is also not recommended to host a database service (PostgreSQL, MySQL, etc.) on services like Cloud Run. You can review related threads here and here. In summary, stateless containers which scale up and down, and which have no persistent storage, would interfere with how a database properly functions. Since you are using GCP, you can opt for services which are already offered, such as Cloud SQL. Cloud SQL lets you create PostgreSQL instances and integrates with Cloud Run-hosted apps.
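
For reference, a Cloud Run service reaches an attached Cloud SQL instance over a unix domain socket mounted at /cloudsql/<INSTANCE_CONNECTION_NAME>. A minimal sketch of the Django side, assuming the instance is attached under the service's connection settings (the connection name and credentials below are placeholders, not from the question):

# settings.py excerpt; "myproj:us-central1:myinstance" is a placeholder
# instance connection name - Cloud Run mounts the matching socket
# directory when the instance is attached to the service.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "postgres",
        "USER": "postgres",
        "PASSWORD": "postgres",
        "HOST": "/cloudsql/myproj:us-central1:myinstance",  # unix socket dir
        "PORT": "",  # left empty when connecting over the socket
    }
}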

ErnestoC

Following John Hanley's comments, this is what I did:

  1. Created a Cloud SQL PostgreSQL instance and set up my database (internal IP).

  2. For database connections (see also the django-environ sketch after this list):

DATABASES = {'default': env.db()}
# If the flag has been set, configure Django to use the proxy
if os.getenv("USE_CLOUD_SQL_AUTH_PROXY", None):
    DATABASES["default"]["HOST"] = "cloudsql-proxy"
    DATABASES["default"]["PORT"] = 5432
  3. Run gcloud run deploy --source . to deploy the current project (see below for the Dockerfile).

  4. In Cloud Run, choose "Edit & deploy new revision": under Container, set your port (it is 8080 by default); under Variables and Connections (if you are using env vars), set your environment variables, e.g. SECRET_KEY, DATABASE_URL, GOOGLE_CLOUD_PROJECT, GS_BUCKET_NAME.

e.g. .env file:

SECRET_KEY=mysupersecretkey
GS_BUCKET_NAME=mybucket
GOOGLE_CLOUD_PROJECT=myprojid
DATABASE_URL=postgres://dbuser:<db pass>@//cloudsql/projid:us-central1:instancename/dbname

  5. Under Connections, connect your service to the PostgreSQL instance you created above.

  6. Under Security you might want to change the service account (not necessary).

  7. Click Deploy and your container will come to life.
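
A side note on step 2: env.db() comes from django-environ, which builds the DATABASES entry from the DATABASE_URL variable. Here is a minimal sketch of the setup that the snippet in step 2 assumes (django-environ itself is an assumption; the answer only shows the env.db() call):

# settings.py; assumes django-environ is installed (pip install django-environ)
import os

import environ

env = environ.Env()
environ.Env.read_env()  # load a local .env file if one is present

# env.db() parses DATABASE_URL (e.g. the value in the .env example above)
# into the dict Django expects under DATABASES["default"].
DATABASES = {"default": env.db()}

if os.getenv("USE_CLOUD_SQL_AUTH_PROXY", None):
    # For local development against the Cloud SQL Auth Proxy container,
    # override the host/port parsed from DATABASE_URL.
    DATABASES["default"]["HOST"] = "cloudsql-proxy"
    DATABASES["default"]["PORT"] = 5432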

Here is my sample Dockerfile:



FROM python:3.9
WORKDIR /opt/app

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# install dependencies

RUN pip install --upgrade pip
COPY ./requirements.txt /opt/app/requirements.txt
RUN chmod +x /opt/app/requirements.txt
RUN pip install -r requirements.txt
RUN adduser --disabled-password --no-create-home django-user
# copy project
COPY . /opt/app/
RUN chmod +x /opt/app/docker-entrypoint.sh
EXPOSE 8000
ENTRYPOINT [ "/opt/app/docker-entrypoint.sh" ]
# CMD supplies the command that the entrypoint exec's via "$@";
# for example (the server command is project-specific):
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]


And the docker-compose.yml for local development:


version: '3.9'

services:
  app:
    container_name: myname
    build:
      context: .
    ports:
      - "8000:8000"
    volumes:
      - ./src/:/usr/src/app/
      - ./d_creds.json:/secrets/d_creds.json
    env_file:
      - ./.env
    restart: always

# The proxy will help us connect to remote CloudSQL instance locally.
# Make sure to turn off any VPNs for the proxy to work.
  cloudsqlproxy:
    container_name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.19.1
    volumes:
      - ./d_creds.json:/secrets/cloudsql/d_creds.json
    ports:
      - "127.0.0.1:5432:5432"
    command: /cloud_sql_proxy -instances="projid:zone:instance-name"=tcp:0.0.0.0:5432 -credential_file=/secrets/cloudsql/d_creds.json
    restart: always
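
Before involving Django, you can confirm the proxy is reachable locally with a short psycopg2 check (a sketch; the database name, user, and password are placeholders matching the .env example):

# check_proxy.py (hypothetical helper); assumes the cloudsqlproxy service
# above is running and psycopg2 is installed.
import psycopg2

conn = psycopg2.connect(
    host="127.0.0.1",  # the port published by the cloudsqlproxy service
    port=5432,
    dbname="dbname",
    user="dbuser",
    password="your-db-password",  # placeholder
)
print("connected via:", conn.get_dsn_parameters()["host"])
conn.close()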

For the entrypoint .sh:

#!/bin/bash
python3 manage.py makemigrations
python3 manage.py migrate

exec "$@"
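
Because the script ends with exec "$@", whatever the Dockerfile's CMD supplies replaces the shell as the container's main process once the migrations finish; that is why the server command belongs in CMD rather than in the script itself.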

griffins