In my application I need to "simulate" an HTTP timeout. Simply put, in this scenario:

client -> myapp -> server

`client` makes an HTTP POST request to `myapp`, which forwards it to `server`. However, `server` does not respond due to network issues or similar problems. I am stuck with an open TCP session from `client` which I'll need to drop.
My application uses web.py, nginx and uwsgi.
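For context, the forwarding part of `myapp` looks roughly like the sketch below. This is a minimal web.py handler, not my actual code: the upstream URL, the `requests` dependency and the 30-second timeout are placeholders for illustration.

```python
import web
import requests  # assumed HTTP client for the upstream call

SERVER_URL = "http://server.example.com/endpoint"  # hypothetical upstream URL

urls = ("/forward", "Forward")


class Forward:
    def POST(self):
        payload = web.data()  # raw body of the POST coming from client
        # Forward the request to server; this is the call that can hang
        # when server stops responding.
        resp = requests.post(SERVER_URL, data=payload, timeout=30)
        return resp.text


app = web.application(urls, globals())
application = app.wsgifunc()  # the WSGI callable that uwsgi serves
```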
I cannot return a custom HTTP error such as `418 I'm a teapot` - it has to be a connection timeout to mirror `server`'s behaviour as closely as possible.
One hack-y solution could be (I guess) to just `time.sleep()` until `client` disconnects, but this would tie up a uwsgi worker, and I have a feeling it could lead to resource starvation because a `server` timeout is likely to happen for other connections as well. Another approach is pointed out here, however that solution implies returning something to `client`, which is not my case.
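The hack-y version would amount to something like this inside the handler above (a sketch only; the one-hour sleep is an arbitrary placeholder):

```python
import time


class Forward:
    def POST(self):
        # Never answer: hold the connection open until client gives up.
        # Each stuck request pins one uwsgi worker for the whole wait,
        # which is the resource-starvation worry mentioned above.
        time.sleep(3600)
        return ""
```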
So my question is: is there an elegant way to kill a `uwsgi` worker programmatically from Python code?
So far I've found:

- `set_user_harakiri(N)`, which I could combine with a `time.sleep(N+1)`. However, in this scenario uwsgi detects the harakiri and tries re-spawning the worker (sketched after this list).
- `worker_id()`, but I'm not sure how to handle it - I can't find much documentation on using it.
- A suggestion to use `connection_fd()` as explained here.
- `disconnect()`, which does not seem to do anything, as the code continues and returns to `client`.
- `suspend()` does suspend the instance, but nginx returns the boilerplate error page.
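For the first option, what I tried looks roughly like this (a minimal sketch; the route class, the 3-second harakiri and the 4-second sleep are placeholders, and it assumes the code runs inside a uwsgi worker so the `uwsgi` module is importable):

```python
import time

import uwsgi  # only importable when running under uwsgi


class Timeout:
    def POST(self):
        # Tell uwsgi to kill this worker if the current request runs
        # longer than 3 seconds.
        uwsgi.set_user_harakiri(3)
        # Sleep past the harakiri deadline so the worker is killed while
        # client is still waiting for a response.
        time.sleep(4)
        # Never reached: uwsgi detects the harakiri, kills the worker and
        # re-spawns it, which is the behaviour described in the list above.
        return ""
```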
Any other ideas?
UPDATE
Turns out it's more complicated than that. If I just close the socket or disconnect from uwsgi, nginx detects a 'server error' and returns its boilerplate 500 error page. And I do not know how to tell nginx to stop being so useful.