
How does Django handle multiple requests in a production environment?

Suppose we have a web server: Apache, Nginx, Gunicorn, etc.
Do those servers start a new process for every request that comes from a web browser?
If so, doesn't that cause huge overhead?
If not, how does the same view (say, a `def hello(request)` view bound to the `/hello` URL) serve several requests at the same time?
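
For concreteness, the kind of view I mean looks roughly like this (a minimal sketch; `hello` and `/hello` are just the example names from above):

```python
# views.py -- the toy view from the question
from django.http import HttpResponse

def hello(request):
    # Plain synchronous view: how is this function called for several
    # clients requesting /hello at the same time?
    return HttpResponse("Hello, world!")
```

and bound in `urls.py` (assuming the view lives in a placeholder app called `myapp`):

```python
# urls.py -- binding the view to /hello
from django.urls import path
from myapp.views import hello  # "myapp" is a placeholder app name

urlpatterns = [path("hello/", hello)]
```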

I've already seen answers to the question "... handle multiple users".

  • Are you in a unix environment with access to a command line so you can watch Apache logs, ps, etc.? If so, you can run this command to see the request-handling HTTP processes: `$ watch "ps aux | grep http "`. As you will see, requests are served through separate processes. It only causes a huge load if the work is CPU, memory, or IO intensive. – JacobIRR Mar 10 '18 at 20:27
  • Yes, but the question is nevertheless theoretical. IO could be quite intensive because of DB usage, for example. – Alex-droid AD Mar 10 '18 at 20:36
  • For how web servers in Python work, you might find it interesting to read this series of posts: https://ruslanspivak.com/lsbaws-part1/ – Graham Dumpleton Mar 10 '18 at 23:05
  • This one is Apache/mod_wsgi specific, but it may also be interesting to read, as it goes into the use of processes and threads in WSGI servers: http://modwsgi.readthedocs.io/en/develop/user-guides/processes-and-threading.html – Graham Dumpleton Mar 10 '18 at 23:07
  • See https://en.wikipedia.org/wiki/Fork_(system_call) – wim Mar 10 '18 at 23:14

1 Answer


Django handles just one request at a time.

If you use the very old CGI interface (between your web server and Django), a new Django process is started for every request. But I don't think anybody does that any more.

There are various other interfaces between web servers and server-side programs that do not load a new program for every request. FastCGI is one of them (and agnostic to the programming language); some languages have their own module implemented directly in the web server (e.g. mod-php) [Python had this in the past]. But nowadays Django, and Python in general, prefer the WSGI interface.
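
To make the WSGI part concrete: the web server loads the Django app once per worker process and then calls it for every request. A minimal sketch of the `wsgi.py` module Django generates (assuming a project named `myproject`):

```python
# myproject/wsgi.py -- sketch; "myproject" is a placeholder project name
import os

from django.core.wsgi import get_wsgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

# A WSGI server (Gunicorn, uWSGI, mod_wsgi, ...) imports this module once
# per worker process, then calls application(environ, start_response)
# for every incoming request -- no new process is started per request.
application = get_wsgi_application()
```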

So the web server starts one or more copies of the program (the Django app) in parallel. It sends each request to a free process (or queues it; this is handled by the web server). How many processes there are, and for how long they live, depends on the web server configuration.
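
With Gunicorn, for example, the number of parallel worker processes (and optional threads per worker) is part of that configuration; a sketch with assumed values:

```python
# gunicorn.conf.py -- example values only; tune for your own hardware/load.
# Run with: gunicorn -c gunicorn.conf.py myproject.wsgi
workers = 4     # parallel worker processes, each with its own copy of the Django app
threads = 2     # threads per worker; each thread handles one request at a time
timeout = 30    # seconds before a stuck worker is killed and replaced
```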

The databases supported by Django support concurrency, so there is no problem with different processes handling the same app. [SQLite is different, but you should use it only for developing/testing Django.] With log files [especially multiline records] you may see some problems: parallel processes writing to the same file at the same time.
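
One way to avoid that shared-log-file issue is to give every worker process its own log file, e.g. by putting the process ID in the file name. A sketch of a Django `LOGGING` setting along those lines (the path and handler layout are assumptions, not defaults):

```python
# settings.py (excerpt) -- one log file per worker process
import os

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "file": {
            "class": "logging.FileHandler",
            # Embedding the PID keeps parallel workers from interleaving
            # multi-line records in the same file.
            "filename": f"/var/log/myapp/django-{os.getpid()}.log",
        },
    },
    "root": {"handlers": ["file"], "level": "INFO"},
}
```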

NOTE: in this explanation I use "web server" in a broad sense; this includes Gunicorn, mod_wsgi, etc.

Giacomo Catenazzi
  • So, if the webserver starts ("opens") only **one** Django app ("program"), then until the current request has been served, any other request will be suspended? – Alex-droid AD Mar 10 '18 at 21:20
  • "The databases supported by Django support concurrency". Does that mean that when a request hits the DBMS and is waiting for its response, Django can handle the next HTTP request? If so, why do we have tools like asyncio? – Alex-droid AD Mar 10 '18 at 21:28
  • @Alex-droidAD: asyncio is a Python thing, so it is used inside Python programs. The kernel (and therefore web servers and databases) implements similar things (but more complex, because low level: `poll` and async calls). So in the case of Django, it is the web server that handles such asyncio-like work. – Giacomo Catenazzi Mar 11 '18 at 06:56
  • For the first question, yes. The web server handles the connection to the external world, and it would get confused if it received output for several streams at once. Most web servers (and Gunicorn) are very well designed to handle parallel requests with low overhead, offloading that difficult task from server-side applications (e.g. Django). – Giacomo Catenazzi Mar 11 '18 at 08:53