
Previously I used Flask in combination with Gunicorn to develop and deploy APIs in Red Hat OpenShift. Multiple sources claim that FastAPI is faster, so I thought I would build two simple APIs, one in Flask and one in FastAPI, to compare them.

The code written in Flask is:

from flask import Flask

app = Flask(__name__)


def find_sums_single():
    def cpu_bound(number):
        return sum(i * i for i in range(number))
  
    numbers = [5_000_000 + x for x in range(5)]
    for number in numbers:
        cpu_bound(number)


@app.route("/loadtest/", methods=['GET'])
def loadtest():
    find_sums_single()
    return {"message": "Successfully performed loadtest"}

and started with:

PORT=${1:-5757}
gunicorn --preload --timeout=60 -b "0.0.0.0:$PORT" --workers=1 wsgi:app

The code written in FastAPI is:

from fastapi import FastAPI

app = FastAPI(debug=False)

def find_sums_single():
    def cpu_bound(number):
        return sum(i * i for i in range(number))
  
    numbers = [5_000_000 + x for x in range(5)]
    for number in numbers:
        cpu_bound(number)
        
  
@app.get("/loadtest/", status_code=200)
def loadtest():
    find_sums_single()
    return {"message": "Successfully performed loadtest"}

and started with:

uvicorn api:app --host 0.0.0.0 --port 5757

or

gunicorn api:app --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:5757

I always use a single worker, because I prefer to let OpenShift handle scaling up/down.

Using Locust (with a 2-minute run) I got the following results:

[Locust results screenshot]

Here, FastAPI does not look faster at all. Did I do something wrong?

davidism
Niels

2 Answers


Well... what you are doing is mainly CPU-bound work.

A web application will usually be more I/O-bound, and compute the CPU-bound work elsewhere.

That said, the benchmark is not wrong. It's just a misconception to conclude that one web framework is slower than another based on CPU-bound work.
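As an illustration of computing CPU-bound work "elsewhere", the question's `cpu_bound` helper could be offloaded to a process pool so the single web worker stays free to serve requests. A minimal sketch (the `find_sums_offloaded` wrapper is hypothetical, not from the question):

```python
# Sketch: offload the question's CPU-bound helper to separate processes,
# so the (single-worker) web process is not blocked while the sums run.
from concurrent.futures import ProcessPoolExecutor


def cpu_bound(number):
    return sum(i * i for i in range(number))


def find_sums_offloaded(numbers):
    # map() distributes each computation to a worker process
    with ProcessPoolExecutor() as pool:
        return list(pool.map(cpu_bound, numbers))


if __name__ == "__main__":
    find_sums_offloaded([5_000_000 + x for x in range(5)])
```

A route handler would then hand the numbers to the pool instead of computing them inline.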

As a note, you can improve uvicorn's performance by installing uvloop and httptools.
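For example, once both packages are installed (`pip install uvloop httptools`), they can be selected explicitly when launching uvicorn programmatically; uvicorn also auto-detects them when the defaults are left on `auto`. A launch sketch assuming the question's `api:app` module:

```python
# Launch sketch: run the question's app with uvloop + httptools.
# Assumes `pip install uvloop httptools` has been done; with the default
# "auto" settings uvicorn would pick them up automatically as well.
import uvicorn

if __name__ == "__main__":
    uvicorn.run("api:app", host="0.0.0.0", port=5757,
                loop="uvloop", http="httptools")
```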

Disclaimer: I'm a uvicorn maintainer.

Marcelo Trylesinski
  • I do know FastAPI is faster when the application is more I/O bound. But it is stated that even in the worst case (like here) it should perform slightly better. So I thought maybe I did something wrong. Thank you for your response and the tip regarding uvloop and httptools; I will look into that! – Niels Jun 29 '22 at 13:08

You should have `async` before the route def in FastAPI to take advantage of asynchronous request handling, which is one of the main reasons FastAPI is faster in production (but only when multiple requests are made at the same time).
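The effect can be sketched with plain asyncio, independent of FastAPI: a handler that awaits I/O yields the event loop, so concurrent "requests" overlap instead of queueing (the handler names here are illustrative):

```python
import asyncio
import time


async def io_bound_handler():
    # Stands in for an async route awaiting I/O (a DB call, an HTTP request):
    # the await yields control so other coroutines can run in the meantime.
    await asyncio.sleep(0.1)
    return "done"


async def serve_ten():
    start = time.perf_counter()
    # Ten concurrent "requests" on a single event loop
    results = await asyncio.gather(*(io_bound_handler() for _ in range(10)))
    return results, time.perf_counter() - start


results, elapsed = asyncio.run(serve_ten())
```

Note that this only helps for I/O: a CPU-bound body like the question's sums would block the event loop inside an `async def` route.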

  • 2
    Your answer could be improved with additional supporting information. Please [edit] to add further details, such as citations or documentation, so that others can confirm that your answer is correct. You can find more information on how to write good answers [in the help center](/help/how-to-answer). – CheTesta Oct 06 '22 at 14:10
  • Each request is still an awaited thread. The simple answer is that if you're calling async code from your route handler, you should use an async route, but for more details, see https://fastapi.tiangolo.com/async/#very-technical-details – Cliff Jan 05 '23 at 22:16