
Why does the Python requests.get() function take so long to time out (or fail) when the target request URL is a remote URL (i.e. not localhost) that can't be reached?

If the target request URL is localhost and it can't be reached, the call fails very quickly, so the issue only occurs when the target request URL is remote.

How could I make it quicker?

Jude Maranga
  • You need to change the default request timeout: https://stackoverflow.com/questions/21965484/timeout-for-python-requests-get-entire-response – Ilya Oct 23 '22 at 09:14

1 Answer


Yes, a timeout will do the trick, but make sure it still leaves the remote server enough time to actually complete the request when it is reachable (if it's supposed to). In other words: don't set a 1s timeout if the request normally takes 5s to finish. See the Requests Docs for Timeouts.

import requests

requests.get('https://github.com/', timeout=0.001)  # typically raises ConnectTimeout almost immediately
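
If the concern is that a single low timeout would also cut off slow-but-successful responses, requests also accepts a (connect, read) timeout tuple: the connection attempt can fail fast while an established connection still gets enough time to return a response. A minimal sketch, assuming a placeholder URL and limits you would tune yourself:

import requests
from requests.exceptions import ConnectTimeout, ReadTimeout

try:
    # Give up on connecting after ~3 s, but allow up to 30 s for the
    # server to send its response once the connection is established.
    response = requests.get('https://example.com/api', timeout=(3.05, 30))
    response.raise_for_status()
except ConnectTimeout:
    print('Could not reach the server in time')
except ReadTimeout:
    print('Server reached, but the response took too long')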

Alternatively, when running locally you could point the request at a server you can actually reach, and use a variable that selects the "correct" server depending on whether the Python app is running locally or remotely, as in the sketch below.
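
As a rough sketch of that idea, assuming a hypothetical APP_ENV environment variable (the variable name and URLs below are placeholders, not part of the original setup):

import os
import requests

# Pick the base URL from a hypothetical environment flag; adjust to
# however your deployment distinguishes local from remote runs.
BASE_URL = (
    'http://localhost:8000'
    if os.environ.get('APP_ENV', 'local') == 'local'
    else 'https://api.example.com'
)

response = requests.get(f'{BASE_URL}/health', timeout=5)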

Harrison
  • If I change the timeout to a lower value, wouldn't that be problematic if the request does reach the server and the process normally takes longer? For example, if I set the timeout to 1s and the request did go through to the server, but the server takes 5s to finish processing and return a response, would this be an issue? – Jude Maranga Oct 23 '22 at 10:40
  • It would be an issue, which is why you should exercise caution with the timeout parameter. It would be better to point the request to the correct URL. Out of interest, why is the URL not accessible on localhost? – Harrison Oct 23 '22 at 18:37