
I have a cron job set up on one server to run a backup script, written in PHP, that is hosted on another server.

The command I've been using is

curl -sS http://www.example.com/backup.php

Lately I've been getting this error when the cron job runs:

curl: (52) Empty reply from server

If I go to the link directly in my browser the script runs fine and I get my little backup ZIP file.

Piper
Paul Sheldrake
  • This really has nothing to do with PHP as curl doesn't care what the outputting file processor is. – Kevin Peno Dec 17 '09 at 21:03
  • 1
    Could your backup script be running so long that it causes the `curl` to timeout? Have you tried increasing the default curl waits to connect with `--connect-timeout ` and for the whole operation to take with `--max-time `? – Yzmir Ramirez May 25 '12 at 05:52
  • @YzmirRamirez curl timeout error code is 28. Src: https://ec.haxx.se/usingcurl-timeouts.html – Luckylooke Mar 20 '19 at 15:03
  • 2
    With Docker + Uvicorn (FastAPI) it helped me to set --host 0.0.0.0 – TechWisdom May 02 '20 at 00:11
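The timeout flags suggested in the comments above can be combined into a single diagnostic run. A minimal sketch, using the question's placeholder URL and illustrative timeout values; the `explain_curl_exit` helper is hypothetical, not a curl feature:

```shell
# Rerun the cron command with generous timeouts, then report
# what curl's exit code means.
explain_curl_exit() {
  case "$1" in
    0)  echo "success" ;;
    28) echo "timeout: raise --connect-timeout / --max-time" ;;
    52) echo "empty reply from server" ;;
    *)  echo "other curl error: $1" ;;
  esac
}

# --connect-timeout: seconds allowed to establish the connection
# --max-time: seconds allowed for the entire transfer
curl -sS --connect-timeout 15 --max-time 600 \
  "http://www.example.com/backup.php" -o backup.zip
explain_curl_exit "$?"
```

Running this from the same server as the cron job (rather than your browser) is what matters, since the two may take different network paths.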

25 Answers


This can happen if curl is asked to do plain HTTP on a server that does HTTPS.

Example:

$ curl http://google.com:443
curl: (52) Empty reply from server
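A quick guard against this mistake can be sketched as a shell function; `check_scheme` is a hypothetical helper, not part of curl:

```shell
# Flag the common plain-http-on-port-443 mistake before calling curl.
check_scheme() {
  case "$1" in
    http://*:443|http://*:443/*) echo "mismatch: port 443 usually expects https" ;;
    *) echo "ok" ;;
  esac
}

check_scheme "http://google.com:443"    # the answer's failing URL
check_scheme "https://google.com:443"   # scheme and port agree
```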
Benoit Duffez
  • 20
    This was the situation in my case. `curl localhost:8443` gave me the empty reply error. `curl -k https://localhost:8443` served the page properly. – lowly_junior_sysadmin Nov 01 '17 at 23:34
  • 2
    I've just stumbled over this and I missed the missing s completely. I wonder why there isn't a more clear error (even like connection refused: it would make more sense). – ShinTakezou Oct 04 '18 at 10:19
  • 1
  • In my case a VPN service cut off the response. – Alex Szücs Jan 11 '22 at 16:38
  • In my case, this happened because the value of server-side `WriteTimeout` was too short. Increasing the value from `1` second to `5` seconds solved the problem. – ynn Mar 20 '23 at 11:32

curl gives this error when there is no reply from the server at all, since it is a protocol violation for an HTTP server to send nothing back in response to a request.

I suspect the problem you have is that there is some piece of network infrastructure, like a firewall or a proxy, between you and the host in question. Getting this to work, therefore, will require you to discuss the issue with the people responsible for that hardware.

Steve Knight
    This is likely the wrong approach to troubleshooting. Empty reply means it was able to connect to the IP/port, but the server returned nothing in the reply. It's likely an issue on the service itself. – Robert Christian Apr 15 '15 at 02:15
  • 5
    Well, not quite. When this happened to me it was because my authenticating proxy wasn't connecting through to the remote host. So in actual fact there was no issue on the service itself. – Steve Knight Jun 22 '16 at 10:33
  • 1
  • In my case I have a proxy, which is disabled for the loopback interface where the server is running. – rbaleksandar Feb 20 '18 at 10:36
  • In my case a NGINX web cache server with no hard drive space left. – Alien Life Form May 02 '18 at 05:53
  • In my case, it was my organisation's VPN that was severing HTTP requests that last more than 60 seconds. Turning it off fixed the problem. – stwr667 Jan 28 '22 at 07:09
  • this is absolutely the wrong answer – curl **did connect** to the server but got nothing in the reply, which is the opposite of not being able to connect (firewall, proxy, etc.). – Boppity Bop Apr 21 '22 at 13:46

It can happen when the server does not respond due to 100% CPU or memory utilization.

I got this error when I was trying to access the SonarQube API and the server was not responding due to full memory utilization.

Jayaprakash

In my case it was a server-side redirection; curl -L (follow redirects) solved my problem.

Matt Seymour
Guillermo Prandi

Another common reason for an empty reply is a timeout. Check all the hops from where the cron job runs to your PHP/target server. There's probably a device/server/nginx/LB/proxy somewhere along the line that terminates the request earlier than you expect, resulting in an empty response.

garbagecollector

In the case of SSL connections, this may be caused by a bug in older versions of nginx that makes the server segfault during curl and Safari requests. The bug was fixed around nginx version 1.10, but there are still a lot of older nginx versions on the internet.

For nginx admins: adding ssl_session_cache shared:SSL:1m; to the http block should solve the problem.

I'm aware that the OP was asking about the non-SSL case, but since this is the top Google result for the "empty reply from server" issue, I'm leaving the SSL answer here, as I was one of many banging their heads against the wall over this.
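In context, that directive sits at the top level of the http block in nginx.conf; a minimal fragment (server blocks and paths omitted):

```nginx
http {
    # Workaround for the segfault bug in pre-1.10 nginx:
    ssl_session_cache shared:SSL:1m;

    # ...the rest of the usual http-level configuration...
}
```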

SiliconMind

To turn @TechWisdom's comment into an answer: with Docker + Uvicorn (FastAPI) you need to bind to the Docker host with the Uvicorn command line option --host 0.0.0.0 (The default is 127.0.0.1).
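A sketch of what that looks like in a Dockerfile; the image base, module name `main:app`, and port are assumptions:

```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY . .
RUN pip install fastapi uvicorn
# 127.0.0.1 inside the container is unreachable from the host,
# so bind Uvicorn to all interfaces:
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Remember to publish the port as well (e.g. `docker run -p 8000:8000 …`), or curl from the host will still get no reply.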

Tomerikoo
Noumenon

In my case this was caused by a PHP APC problem. First place to look would be the Apache error logs (if you are using Apache).

Tomerikoo
Andrew McCombe
  • Can you explain a bit more? How can this be caused by APC? I am not even running this inside PHP, I'm just using command line. – The Onin Apr 12 '17 at 02:31
  • This was so long ago, I can't remember the reason for APC being the cause of this issue. Sorry I can't help. – Andrew McCombe Apr 12 '17 at 13:57

This error can also happen while the server is still processing the data. It usually happens to me when I POST files to REST API sites that have many entries and take a long time to create the records and return.

Thiago Conrado

I ran into this error sporadically and could not understand. Googling did not help.

I finally found out. I run a couple of docker containers, among them NGINX and Apache. The command at hand addresses a specific container, running Apache. As it turned out, I also have a cron job doing some heavy lifting at times running on the same container. Depending on the load this cron job puts on this container, it was not able to answer my command in a timely manner, resulting in error 52 empty reply from server or even 502 Bad Gateway.

I discovered and verified this by plain curl when I noticed that the process I investigated took less than 2 seconds and all of a sudden I got a 52 error and then a 502 error and then again less than 2 seconds - so it was definitely not my code which was unchanged. Using ps aux within the container I saw the other process running and understood.

Actually, I was bothered by 502 Bad Gateway from NGINX with long running jobs and could not fix it with the appropriate parameters, so I finally gave up and switched these things to Apache. That's why I was puzzled even more about these errors.

The remedy is simple. I just fired up some more instances of this container with docker service scale and that was it; Docker load-balances on its own.


Well, there is more to this as another example showed. This time I did some repetitious jobs.

I found out that after some time PHP ran out of memory that could not be reclaimed, so the process died.

Why? Having more than a dozen containers on an 8 GB RAM machine, I initially thought it would be a good idea to limit RAM usage on the PHP containers to 50 MB.

Stupid! I forgot about it, but Swarmpit gave me a hint. I call ini_set("memory_limit", -1); in the constructor of my class, but that only went as far as those 50 MB.

So I removed those restrictions from my compose file. Now those containers may use up to 8 GB. The process has been running with Apache for hours now and the problem looks solved, with memory usage rising well beyond 100 MB.


Another caveat: to easily get and read debug messages, I started said process in Opera under Windows. That is fine while errors still show up quickly.

However, once the last error is fixed, the process naturally runs and runs, memory usage in the browser builds up, and eventually my local machine becomes unusable. If that happens, kill the tab and the process keeps running fine.

kklepper

The question is very open, in my opinion. I will share exactly what I was trying to achieve and how I resolved the problem.

Context

Java app running on ports 8999 and 8990. The app runs as a docker-compose stack on an AWS EC2 Ubuntu 20 server.

I added a Network Load Balancer that must receive traffic on TLS port 443 under an AWS ACM cert. The forwarding setup is as follows:

  1. port TLS 443 from NLB --> TCP port 8990 in the EC2 instance
  2. port TCP 22 from NLB --> TCP port 8999 in the EC2 instance

I was getting the error from the curl CLI

Solution

I realized that there is a private AWS VPC that handles the traffic, and the AWS EC2 instance runs in a private subnet. The instance had security group rules that were blocking the traffic. The solution was to open all traffic (0.0.0.0/0) to the EC2 instance on ports 8990 and 8999.

Within the AWS load balancer console I saw my target groups with healthy checks, and after some client reboots and clearing of the DNS cache I was able to access the application through HTTPS.

Dharman
Andre Leon Rangel

Instead of going through curl, try connecting to the site you're trying to reach with Telnet. The response your connection attempt returns will be exactly what curl sees when it tries to connect (but which it unhelpfully hides from you). Now, depending on what you see here, you might draw one of several conclusions:

You're attempting to connect to a website that's a name-based virtual host, meaning it cannot be reached via IP address. Or something's gone wrong with the hostname: you may have mistyped something. Note that using GET instead of POST for parameters will give you a more concrete answer.

The issue may also be tied to the 100-continue header. Try running curl_getinfo($ch, CURLINFO_HTTP_CODE) and check the result.

RamenChef
Felix
  • Interesting point. I was actually able to get the HTML as response with `telnet hostname` and `GET ` – The Onin Apr 12 '17 at 02:35

You can try putting your URL in double quotes: curl -sS "http://www.example.com/backup.php". That worked for me. I don't know the exact reason, but I suppose quoting keeps the shell from mangling the URL, so the complete request reaches the server.
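The likely mechanism is worth spelling out: without quotes, the shell itself interprets characters such as `&` (background the command) and `?` (filename glob) before curl ever sees the URL. A sketch with a hypothetical query string:

```shell
# A URL with query parameters (token/full are made-up parameters):
url='http://www.example.com/backup.php?token=abc&full=1'

# Quoted, the shell passes the URL through to curl intact:
echo "$url"

# Unquoted, '&' would put the command in the background and '?'
# could be expanded as a glob, so curl would receive a truncated URL.
```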

omar

I've had this problem before. I figured out I had another application using the same port (3000).

Easy way to find this out:

In the terminal, type netstat -a -p TCP -n | grep 3000 (substitute the port you're using for the '3000'). If there is more than one listening, something else is already occupying that port. You should stop that process or change the port for your new process.

ginna
  • 2
    This is a very specific case that you mentioned. This is not, in general, why curl returns you this response. Turns out that this issue needs to be dealt at the server side and not the client side. [This](https://curl.haxx.se/mail/archive-2012-05/0051.html) is where I understood. – Aashish Chaubey Nov 26 '18 at 06:50

In my case (curl 7.47.0), it was because I manually set the Content-Length header on the curl command, with a value calculated by Postman (I used Postman to generate the curl command parameters and copied them to the shell). After I deleted the Content-Length header, it worked normally.

YouCL

My case was due to SSL certificate expiration.

Druvan

I faced this while making a request to my Flask application, which was using Gunicorn for concurrency. The reason was that I had set a timeout smaller than the time the server needed to process and respond to a single request. The following bash script shows how to set the timeout in Gunicorn.

#!/bin/bash

# Start Gunicorn processes
echo Starting Gunicorn.
exec $PWD/venv/bin/gunicorn server:app --worker-class sync --timeout 100000 --keep-alive 60 --error-logfile error.log --capture-output --log-level debug \
    --bind 0.0.0.0:9999

According to Gunicorn | timeout:

Workers silent for more than this many seconds are killed and restarted. Value is a positive number or 0. Setting it to 0 has the effect of infinite timeouts by disabling timeouts for all workers entirely. Generally, the default of thirty seconds should suffice. Only set this noticeably higher if you’re sure of the repercussions for sync workers. For the non sync workers it just means that the worker process is still communicating and is not tied to the length of time required to handle a single request.

hafiz031

Today I faced the same issue; in my case it was a proxy problem. I can reproduce it like this:

➜  retire git:(master) ✗ proxy
➜  retire git:(master) ✗ curl -X GET -H 'Content-Type: application/json' -N http://localhost:11014/ai/stream/chat/test\?question\=1
curl: (52) Empty reply from server
➜  retire git:(master) ✗ unset all_proxy
➜  retire git:(master) ✗ curl -X GET -H 'Content-Type: application/json' -N http://localhost:11014/ai/stream/chat/test\?question\=1
data:{}

data:{}

data:{}

I hope this gives a clue for solving the problem.
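To rule out a proxy in one go, the usual environment variables curl honours can be cleared first; a minimal sketch:

```shell
# Clear the common proxy variables for this shell session.
# (Alternatively, pass --noproxy '*' to curl for a single request.)
unset http_proxy https_proxy all_proxy HTTP_PROXY HTTPS_PROXY ALL_PROXY
```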

Dolphin

In my case I was using uWSGI and had added the property http-timeout set to more than 60 seconds, but it was not taking effect because of some extra whitespace, so the config file was not being loaded properly.

Ankit Adlakha

Allow (whitelist) your host IP on port 80 of "http://www.example.com" (I mean, if you are using AWS).

zawhtut

I faced the same problem: I wanted to make a curl POST request to ingest a file into an Elasticsearch node, and it returned that error. All I did was increase the heap value in config/jvm.options as much as possible, and it worked. For what it's worth, I was working with HTTPS, not HTTP, and that caused no problem at all.

If something else goes wrong, try rebooting the machine you are working with. You might also face problems with the firewall, so you may have to allow, for example, port 9300 or whatever port you need. Those are the problems I faced during my practice with the Elastic Stack so far.

jesus

In my case, with a setup hosting my API on GCP, it was a criminally simple fix: removing a trailing slash from my API call, like so:

Socket Error: https://example.com/customers/

Working: https://example.com/customers

A bit embarrassing but hopefully I can save someone the 20 minutes I was pulling my hair out, as this could be a cause.

dir

In my case, an nginx reverse proxy was in the way: the upstream server sent a 500 response directly, which via NGINX surfaced as curl: (52) Empty reply from server.

cypma5

It happens when you are trying to access a secure (HTTPS) website over plain HTTP.

I suspect you missed the 's'.

Try changing the URL to curl -sS -u "username:password" https://www.example.com/backup.php


vim /etc/elasticsearch/elasticsearch.yml

and change xpack.security.enabled: true to xpack.security.enabled: false. (Note that this disables Elasticsearch authentication entirely.)