
Here is my trivial FastAPI app:

from datetime import datetime
import asyncio

import uvicorn
from fastapi import FastAPI


app = FastAPI()

@app.get("/delayed")
async def get_delayed():
    started = datetime.now()
    print(f"Starting at: {started}")
    await asyncio.sleep(10)
    ended = datetime.now()
    print(f"Ending at: {ended}")
    return {"started": f"{started}", "ended": f"{ended}"}

if __name__ == "__main__":
    uvicorn.run("fastapitest.main:app", host="0.0.0.0", port=8000, reload=True, workers=2)

When I make 2 concurrent calls to it, the code in the handler for the second one doesn't start executing until the first request finishes, producing output like:

Starting at: 2021-09-17 14:52:40.317915
Ending at: 2021-09-17 14:52:50.321557
INFO:     127.0.0.1:58539 - "GET /delayed HTTP/1.1" 200 OK
Starting at: 2021-09-17 14:52:50.328359
Ending at: 2021-09-17 14:53:00.333032
INFO:     127.0.0.1:58539 - "GET /delayed HTTP/1.1" 200 OK

Given that the function is marked async and I am awaiting the sleep, I would expect a different output, like:

Starting at: ...
Starting at: ...
Ending at: ...
INFO:     127.0.0.1:58539 - "GET /delayed HTTP/1.1" 200 OK
Ending at: ...
INFO:     127.0.0.1:58539 - "GET /delayed HTTP/1.1" 200 OK

[For the calls I just opened 2 browser tabs at localhost:8000/delayed]

What am I missing?


1 Answer


It does work concurrently as expected - it is just a browser thing: Chrome, on detecting the same endpoint being requested in different tabs, will wait for the first request to be completely resolved before issuing the second, so it can check whether the result can be cached.

If instead you place 3 HTTP requests from different processes in the shell (see the sketch after the output below), the results are as expected:

HTTP/1.1 200 OK
content-length: 77
content-type: application/json
date: Fri, 17 Sep 2021 19:51:39 GMT
server: uvicorn

{
    "ended": "2021-09-17 16:51:49.956629",
    "started": "2021-09-17 16:51:39.955487"
}


HTTP/1.1 200 OK
content-length: 77
content-type: application/json
date: Fri, 17 Sep 2021 19:51:39 GMT
server: uvicorn

{
    "ended": "2021-09-17 16:51:49.961173",
    "started": "2021-09-17 16:51:39.960850"
}


HTTP/1.1 200 OK
content-length: 77
content-type: application/json
date: Fri, 17 Sep 2021 19:51:39 GMT
server: uvicorn

{
    "ended": "2021-09-17 16:51:49.964156",
    "started": "2021-09-17 16:51:39.963510"
}
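
For reference, one way to fire three such requests concurrently is a small script like the one below (a sketch using only the Python standard library - not necessarily how the output above was produced; three curl or HTTPie calls from separate shells work just as well):

from threading import Thread
from urllib.request import urlopen

URL = "http://localhost:8000/delayed"  # the endpoint from the question

def call_delayed():
    # each thread issues one independent HTTP request
    with urlopen(URL) as resp:
        print(resp.status, resp.read().decode())

threads = [Thread(target=call_delayed) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()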


Adding a random, even if unused, query parameter to the URL in each browser tab cancels this trying-to-cache behavior.
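
For example (the query parameter name is arbitrary - it only has to make each URL unique):

http://localhost:8000/delayed?tab=1
http://localhost:8000/delayed?tab=2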

Related question: Chrome stalls when making multiple requests to same resource?

  • I saw after I posted that it works if I make the requests from different browsers, and thought it was something on the server side, making some kind of hash from headers and query params and preventing multiple identical requests at the same time. Seems that I was wrong. Thanks for the explanation – Levi Sep 17 '21 at 20:45
  • I have a question: do FastAPI/uvicorn create a separate Python process for each request and handle them in parallel, or are all requests handled in the same Python process, which would mean we have to be very careful with our global variables? Please help, I've been trying to find a solution to this for quite some time. – Mayank Pant Jan 03 '22 at 09:30
  • It is the same process, and the same thread - otherwise there would be no point in using async to start with. You should not use global variables in conventional code anyway, but in async projects you simply _can't_! Ensure all the data your functions need is passed in as parameters, or try to use "contextvars" (which are messy to use) - https://docs.python.org/3/library/contextvars.html – jsbueno Jan 03 '22 at 13:16
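
To illustrate the contextvars suggestion from the last comment, here is a minimal sketch (the names request_id and handler are made up for this example): each asyncio task gets its own copy of the context, so setting the variable in one task does not leak into the others the way a module-level global would.

import asyncio
import contextvars

# a ContextVar in place of a module-level global
request_id = contextvars.ContextVar("request_id", default=None)

async def handler(n):
    request_id.set(n)           # visible only inside this task's context
    await asyncio.sleep(0.1)    # other tasks run here, each keeping its own value
    print("task", n, "sees", request_id.get())

async def main():
    await asyncio.gather(*(handler(n) for n in range(3)))

asyncio.run(main())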