
I'm trying to host my Laravel application on GCP Cloud Run and everything works just fine, but for some reason, whenever I run a POST request with lots of data (100+ rows, about 64 MB) that saves to the database, it always throws an error. I'm using nginx with Docker, by the way. Please see the details below.

ERROR

Cloud Run Logs

The request has been terminated because it has reached the maximum request timeout.

nginx.conf

worker_processes  1;

events {
    worker_connections  1024;
}
http {
    include       mime.types;
    sendfile        on;
    keepalive_timeout  65;

    server {
        listen LISTEN_PORT default_server;
        server_name _;
        root /app/public;
        index index.php;
        charset utf-8;
        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        location = /favicon.ico { access_log off; log_not_found off; }
        location = /robots.txt  { access_log off; log_not_found off; }
        access_log /dev/stdout;
        error_log /dev/stderr;
        sendfile off;
        client_max_body_size 100m;

        location ~ \.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_intercept_errors off;
            fastcgi_buffer_size 32k;
            fastcgi_buffers 8 32k;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

    }
    #include /etc/nginx/sites-enabled/*;
}

daemon off;

Dockerfile

FROM php:8.0-fpm-alpine

RUN apk add --no-cache nginx wget

RUN docker-php-ext-install mysqli pdo pdo_mysql

RUN mkdir -p /run/nginx

COPY docker/nginx.conf /etc/nginx/nginx.conf

RUN mkdir -p /app
COPY . /app

RUN sh -c "wget http://getcomposer.org/composer.phar && chmod a+x composer.phar && mv composer.phar /usr/local/bin/composer"
RUN cd /app && \
    /usr/local/bin/composer install --no-dev

RUN chown -R www-data: /app

CMD sh /app/docker/startup.sh

Laravel version:

v9


Please let me know if you need any information that isn't included in my post yet.

  • What is **lots of data**? Specify an actual value instead of a description. What is the error that your app is reporting? Check the Cloud Run logs and post that detail as well. – John Hanley Jul 05 '22 at 08:06
  • @JohnHanley It says `The request has been terminated because it has reached the maximum request timeout.` but I set it to the max, which is `3600` (equivalent to 1 hour). – Blues Clues Jul 05 '22 at 08:28
  • Use a [`queue`](https://laravel.com/docs/9.x/queues) for potentially long running processes. – Peppermintology Jul 05 '22 at 08:47
  • @Peppermintology Good suggestion, but I tried my application in a VM and it works pretty well. In addition, 100 rows is a normal amount and should execute fast. It's just weird that in Cloud Run it throws the error. – Blues Clues Jul 05 '22 at 09:14
  • There will be factors to consider when trying this in a production vs development environment, for example differences in network latency and stability. – Peppermintology Jul 05 '22 at 09:25
  • @Jie can you refer to this [link](https://www.codemag.com/Article/2111071/Beginner%E2%80%99s-Guide-to-Deploying-PHP-Laravel-on-the-Google-Cloud-Platform) and this [thread](https://stackoverflow.com/a/70083898/15774176)? Is it helpful? – Divyani Yadav Jul 05 '22 at 13:31
  • @DivyaniYadav I tried these links but am still getting the same result; also, some of the answers are not related to Cloud Run. – Blues Clues Jul 15 '22 at 06:17
  • Increasing both `keepalive_timeout` and `send_timeout` in nginx might solve the issue. See: http://nginx.org/en/docs/http/ngx_http_core_module.html – ofirule Jul 15 '22 at 20:11
  • Have you checked the PHP ini configuration? Check these values in the php.ini file: `max_input_time = 60` (http://php.net/max-input-time), `upload_max_filesize = 2M` (maximum allowed size for uploaded files, http://php.net/upload-max-filesize), and `max_execution_time = 30` (http://php.net/max-execution-time; note: this directive is hardcoded to 0 for the CLI SAPI). – S_B Jul 16 '22 at 08:25

3 Answers


Increase `max_execution_time` in the PHP configuration. By default it is only 30 seconds. Make it 30 minutes, for example:

max_execution_time = 1800
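
One hedged way to apply this with the php:8.0-fpm-alpine image from the question is to drop an extra ini file into the image instead of editing the base php.ini; the official PHP images load every *.ini file from /usr/local/etc/php/conf.d/. The file name timeouts.ini is just illustrative:

RUN echo "max_execution_time = 1800" > /usr/local/etc/php/conf.d/timeouts.ini

PHP-FPM should pick this up the next time the container starts.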

Increase timeouts of nginx:

http{
   ...
   proxy_read_timeout 1800;
   proxy_connect_timeout 1800;
   proxy_send_timeout 1800;
   send_timeout 1800;
   keepalive_timeout 1800;
   ...
}
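
Note that the proxy_* directives only apply to locations that use proxy_pass; the nginx.conf in the question hands PHP requests to PHP-FPM via fastcgi_pass, so the FastCGI timeouts are probably the ones that matter there. A rough sketch for that location block (the values simply mirror the 1800 seconds used above):

location ~ \.php$ {
    ...
    fastcgi_connect_timeout 1800;
    fastcgi_send_timeout 1800;
    fastcgi_read_timeout 1800;
}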

Another idea worth investigating is giving more resources to your Cloud Run instance (more CPUs, more RAM) so that the request is processed faster and the timeout is avoided. But eventually the timeouts should still be increased.

– Slava Kuravsky

I think the issue has nothing to do with php, laravel, or nginx, but with Cloud Run.

As you can see in the Google Cloud documentation when they describe HTTP 504: Gateway timeout errors:

HTTP 504
The request has been terminated because it has reached the maximum request timeout.

If your service is processing long requests, you can increase the request timeout. If your service doesn't return a response within the time specified, the request ends and the service returns an HTTP 504 error, as documented in the container runtime contract.

As suggested in the docs, please, try increasing the request timeout until your application can process the huge POST data you mentioned: it is set by default to 5 minutes, but can be extended up to 60 minutes.

As described in the docs, you can set it through the Google Cloud console and the gcloud CLI; directly, or by modifying the service YAML configuration.
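
For example, with the gcloud CLI (SERVICE_NAME and REGION are placeholders for your own service and region):

# Raise the request timeout to the 60 minute maximum
gcloud run services update SERVICE_NAME --region=REGION --timeout=3600

# Inspect the deployed configuration to confirm the new value
gcloud run services describe SERVICE_NAME --region=REGION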

– jccampanero
  • Sorry @Jie, I just noticed the comment in which you replied to John Hanley. Have you already set the Cloud Run service timeout to one hour then? Please, could you [verify it](https://cloud.google.com/run/docs/configuring/request-timeout#viewing)? The error is very clear and it is typically related to Cloud Run. Perhaps some configuration is cached for some reason - it shouldn't be, by the way, but just in case, did you try recreating the service from scratch with the suggested timeout increase? – jccampanero Jul 17 '22 at 15:05
  • In addition, there is a [hard limit of 32 MB](https://cloud.google.com/run/quotas#cloud_run_limits) on the maximum HTTP/1 request size, but it is probably unrelated to your issue. – jccampanero Jul 17 '22 at 15:56
  • The request timeout is already set to `3600`, which is 60 minutes, but I still get the same result. – Blues Clues Jul 18 '22 at 18:05
  • Thank you very much for the feedback @Jie. I see. The strange thing is that, as you can see in the documentation, it is a typical error reported by Cloud Run, so it makes perfect sense. As suggested in the comment, did you try to create the service from scratch in order to rule out any problem with the current one? Did you verify that the timeout was set to `60` minutes as suggested as well? Consider also reviewing the suggested timeout-related changes in the other answers, although I think that if the same container runs successfully in a VM, everything should be correctly configured in PHP and nginx. – jccampanero Jul 18 '22 at 18:11
  • Yup, I tried to create a new one but am still getting the same issue (sorry for not indicating that in my post). Just thinking, maybe there's something wrong with my nginx config? – Blues Clues Jul 18 '22 at 18:23
  • Sorry to hear that. I am not an expert in nginx configuration, it is a deep topic indeed, but your config looks fine to me. Are you using nginx, and with the same configuration, in the VM setup you mentioned? If not, try running the full containerized stack in your VM or your machine and see if it works. It will help you isolate whether the error is related to Cloud Run or not. Anyway, did you try increasing the different timeouts as proposed in the other answers? I think especially `send_timeout` could be relevant. Finally, are there any other suspicious error traces in your logs? – jccampanero Jul 18 '22 at 18:38
  • One certain thing, on the other hand, is that the error page being displayed is the one from your nginx server, so perhaps the error that appears in the logs is masking another one. Please, in addition to configuring `send_timeout`, try tweaking the [FastCGI-specific timeouts](http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_read_timeout), especially `fastcgi_send_timeout` and `fastcgi_read_timeout`; it may be helpful as well. Please consider reading [this](https://stackoverflow.com/questions/561946/how-do-i-prevent-a-gateway-timeout-with-fastcgi-on-nginx) – jccampanero Jul 18 '22 at 21:33
  • or [this other](https://stackoverflow.com/questions/54737851/how-to-increase-timeout-for-nginx) SO related questions. I hope it helps. – jccampanero Jul 18 '22 at 21:33
  • @Jie In the end, were you able to fix the problem? – jccampanero Jul 21 '22 at 15:50
  • your comments helped a lot. Will give you feedback as soon as possible bro – Blues Clues Jul 21 '22 at 19:34
  • I am very happy to hear that the comments were helpful. Yes, please, I will be glad to help if possible. – jccampanero Jul 21 '22 at 21:29

The default nginx timeout is 60s. Since you mentioned the data is 64 MB, the backend may not be able to process that request and send back the response within 60s.

So you could either try to increase the nginx timeouts by adding the block below to your nginx.conf file:

http{
   ...
   proxy_read_timeout 300;
   proxy_connect_timeout 300;
   proxy_send_timeout 300;
   keepalive_timeout 3000;
   ...
}

Or, a better way: don't process the data immediately; push the data to a message queue, send the response instantly, and let background workers handle the processing of the data. I don't know much about Laravel. In Django we can use RabbitMQ and Celery/Pika.

To get the result of a request with huge data, you can poll the server at regular intervals or set up a WebSocket connection.
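
Not being a Laravel person, here is only a rough sketch of that queue-and-poll idea in Laravel terms, assuming a queue connection (e.g. database or Redis) is already configured; the ImportRows job name and the rows table are illustrative:

<?php
// app/Jobs/ImportRows.php (illustrative name)

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\DB;

class ImportRows implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(public array $rows) {}

    public function handle(): void
    {
        // Insert in chunks so no single INSERT statement gets too large.
        foreach (array_chunk($this->rows, 500) as $chunk) {
            DB::table('rows')->insert($chunk);
        }
    }
}

In the controller, queue the job and answer immediately, so the HTTP request never gets near the Cloud Run timeout; the client can then poll a status endpoint:

ImportRows::dispatch($request->input('rows'));

return response()->json(['status' => 'queued'], 202);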

– LiquidDeath
  • Great, but I cannot use a queue in Cloud Run since I cannot install Supervisor to run my workers. – Blues Clues Jul 21 '22 at 19:28
  • Okay, but I can see that you can set up a RabbitMQ queue in GCP. I'm doing my deployment using container orchestration. For the same use case, what I did was run the workers without any supervisor, since the workers do just one predefined task, and use Prometheus to monitor them. I have set up 40 workers and it works just fine. In AWS you could use Lambda; I'm unaware of the equivalent in GCP. – LiquidDeath Jul 21 '22 at 19:36