1

I am deploying a Python project on Google Cloud Run using the following Dockerfile. When no container is already running, the cold container startup takes around 35 seconds.

FROM python:3.9-slim as build

ENV DEBIAN_FRONTEND noninteractive

RUN apt update -y && \
    apt upgrade -y && \
    apt install -y curl software-properties-common && \
    add-apt-repository ppa:deadsnakes/ppa && \
    rm -rf /var/lib/apt/lists/* && \
    curl https://bootstrap.pypa.io/get-pip.py | python3.9

WORKDIR /rembg

COPY requirements.txt .
RUN python3.9 -m pip install -r requirements.txt

COPY . .
RUN python3.9 -m pip install .

RUN mkdir -p /home/.u2net/

RUN mv u2netp.onnx /home/.u2net/u2netp.onnx
RUN mv u2net.onnx /home/.u2net/u2net.onnx
RUN mv u2net_human_seg.onnx /home/.u2net/u2net_human_seg.onnx
RUN mv u2net_cloth_seg.onnx /home/.u2net/u2net_cloth_seg.onnx

EXPOSE 5000
ENTRYPOINT ["rembg"]
CMD ["s"]

With another Node.js project I am using FROM ubuntu:20.04 as build and the startup latency is 4 seconds. I tried the same with the Python project but it didn't help. Any idea what the problem is?

user567
  • Your post includes details on how you built the container. Your post does not include details on the application you are running in the container. – John Hanley Apr 17 '23 at 17:13
  • I thought this was a Docker problem. I updated the question by adding the project I am deploying – user567 Apr 17 '23 at 18:09
  • Post your requirements.txt in your question, but as I wrote in my answer, this is just how Python behaves on cold start. There is nothing wrong with Cloud Run or your container; it's Python's slow "import" behavior. – Pentium10 Apr 17 '23 at 18:27

2 Answers

1

Without looking at the requirements.txt file, 99% of the problem will be from there: some heavy Python packages.

In a Python environment on Cloud Run, cold start is a problem because the GIL is not effective. More about this here.
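
One way to see which packages dominate the startup time is Python's built-in import timing; the module name and log file name below are just examples for this project:

python3.9 -X importtime -c "import rembg" 2> import_times.log
# each stderr line shows cumulative microseconds per imported module;
# sort the log to spot the heaviest packages from requirements.txt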

You can make it marginally better with Python 3.11, and if slim variants of your packages exist, use them (package-name@slim); similarly, for ML packages, when no GPU is needed the @cpu annotation will pull a different, lighter package. We applied these improvements and our cold start went from 19 seconds to 16 seconds, so the gain is only marginal. You also need to make sure minimum instances is set to 1 to keep your API always up; then you don't get frequent cold starts, only occasional ones.
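
Setting minimum instances on an existing service looks roughly like this (SERVICE_NAME and REGION are placeholders for your own values):

gcloud run services update SERVICE_NAME --region REGION --min-instances 1
# keeps at least one instance warm so requests rarely hit a cold start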

Pentium10
0

In addition to @Pentium10's answer, you can also use startup CPU boost (currently in preview), which provides additional CPU during container instance startup and for 10 seconds after the instance has started. The actual boost depends on your CPU limit settings (e.g., CPU limit: 0-1, boosted CPU: 2).
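
Enabling it on an existing service looks roughly like this (while the feature is in preview it may live under the beta command group; SERVICE_NAME and REGION are placeholders):

gcloud beta run services update SERVICE_NAME --region REGION --cpu-boost
# --no-cpu-boost turns the feature off again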

You may refer to the Cloud Run documentation on preventing cold starts and optimizing general performance.

Robert G