If all the FastAPI endpoints are defined with `async def`, then there will only be one thread running, right (assuming a single uvicorn worker)?
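
For concreteness, I mean a setup roughly like this (`main`, `/items`, and `read_items` are just placeholder names):

```python
from fastapi import FastAPI

app = FastAPI()

@app.get("/items")
async def read_items():
    # async def endpoints run directly on the event loop;
    # FastAPI does not dispatch them to its threadpool
    return {"items": []}

# started with a single worker, e.g.:
#   uvicorn main:app --workers 1
```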
Just wanted to confirm that in such a setup we will never hit contention on Python's Global Interpreter Lock (GIL). If the same were done in Flask with multiple threads per gunicorn worker, we would be facing the GIL, which prevents true parallelism between threads.
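
For comparison, the threaded Flask setup I have in mind is something like the following (again the names are only examples, and `--threads 4` is an arbitrary count):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/items")
def read_items():
    # sync handler; gunicorn runs each request on one of the worker's
    # threads, and all of those threads share the GIL
    return {"items": []}

# started with one worker and several threads, e.g.:
#   gunicorn --workers 1 --threads 4 main:app
```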
So basically, in the FastAPI setup above, parallelism is limited to 1 since there is only one thread. To make use of all the CPU cores, we would need to increase the number of workers, either with gunicorn or uvicorn (for example, as shown below).
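
Concretely, I would scale out with something like one of these commands (`main:app` and the worker count of 4 are just examples):

```
# uvicorn managing its own worker processes
uvicorn main:app --workers 4

# or gunicorn as the process manager with uvicorn worker processes
gunicorn main:app --workers 4 --worker-class uvicorn.workers.UvicornWorker
```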
Is my understanding correct?