
So I have figured out how to code a FastAPI (https://fastapi.tiangolo.com/) app, and I am ready to deploy my script to Heroku. However, the problem is that when I make a request to the Heroku app, it just returns:

<html>
  <head>
    <title>Internal Server Error</title>
  </head>
  <body>
    <h1><p>Internal Server Error</p></h1>

  </body>
</html>

This means the script is running, but I can't see the error, and locally it works totally fine.

I am not able to see any logs showing where the problem is. My guess is that my Procfile is not correct, because I haven't edited it at all, and I am quite new at this. So I am here to ask: how can I run my FastAPI script on Heroku?

What I know is that to run the script locally, you have to use the command uvicorn main:app --reload, and it won't work if you run e.g. py main.py. What am I doing wrong?

Thrillofit86

5 Answers


The answer(s) are correct, but for production use, running FastAPI behind Gunicorn with ASGI (Uvicorn) workers is the better choice. Here is why: I ran a benchmark for this question, and here are the results.

Gunicorn with Uvicorn workers

Requests per second:    8665.48 [#/sec] (mean)
Concurrency Level:      500
Time taken for tests:   0.577 seconds
Complete requests:      5000
Time per request:       57.700 [ms] (mean)

Pure Uvicorn

Requests per second:    3200.62 [#/sec] (mean)
Concurrency Level:      500
Time taken for tests:   1.562 seconds
Complete requests:      5000
Time per request:       156.220 [ms] (mean)

As you can see, there is a huge difference in RPS (requests per second) and in the response time for each request.
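The RPS figures line up with the raw totals already shown above; a quick sanity check in plain Python (no assumptions beyond the benchmark numbers reported):

```python
# RPS = complete requests / time taken for tests
gunicorn_rps = 5000 / 0.577     # Gunicorn with Uvicorn workers
pure_uvicorn_rps = 5000 / 1.562  # pure Uvicorn

print(round(gunicorn_rps))      # 8666, matching the reported 8665.48
print(round(pure_uvicorn_rps))  # 3201, matching the reported 3200.62
```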

Procfiles

Gunicorn with Uvicorn Workers

web: gunicorn -w 4 -k uvicorn.workers.UvicornWorker main:app

Pure Uvicorn

web: uvicorn main:app --workers 4
Yagiz Degirmenci
    How does this change if using Google Cloud Run? Would you just use a single Uvicorn worker and let google handle the scaling? – Zaffer Jul 21 '21 at 21:54
  • So this means that Gunicorn does a better job at worker management. – Gatsby Lee Jul 26 '21 at 07:10
  • i want to deploy fastapi for serving ml models in k8s on AWS EKS and stumbled on your benchmarks. what is the proper smallest machine configuration(cpu threads and ram, or better which aws ec2 instance) that can handle 4 `UvicornWorker` instances with `gunicorn` for maximum performance and optimal and maximum resource usage. – Naveen Reddy Marthala Jan 15 '22 at 06:35

I've tested your setup, and after some checking (I've never used Heroku before), my guess is that your uvicorn never binds to the appointed port (was the Heroku CLI command heroku local working for you?).

Your Procfile could look like this:

web: uvicorn src.main:app --host=0.0.0.0 --port=${PORT:-5000}
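The `${PORT:-5000}` part is plain shell parameter expansion: Heroku injects the port to bind to via the PORT environment variable, and 5000 is only a fallback for running locally. For example:

```shell
# ${PORT:-5000} expands to $PORT when it is set, otherwise to 5000
unset PORT
echo "${PORT:-5000}"    # prints 5000

PORT=8080
echo "${PORT:-5000}"    # prints 8080
```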

This example assumes your source code lives in a subfolder named src containing an empty __init__.py (marking it as a Python package; alternatively you could add src to the PYTHONPATH, see app.json) and a main.py with your FastAPI app:

import socket
import sys

from fastapi import FastAPI

app = FastAPI()

hostname = socket.gethostname()

version = f"{sys.version_info.major}.{sys.version_info.minor}"


@app.get("/")
async def read_root():
    return {
        "name": "my-app",
        "host": hostname,
        "version": f"Hello world! From FastAPI running on Uvicorn. Using Python {version}"
    }
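For reference, the project layout this Procfile assumes looks like:

```text
.
├── Procfile
└── src
    ├── __init__.py
    └── main.py
```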

I have added this example to GitHub, which you can view on Heroku (for now).

Hedde van der Heide
  • Ohhh you are right, I believe my issue was the Procfile actually. I had the rest correct but not the Procfile, and it seems to work as it should now! Thanks for the GitHub repo too, because that helps me out quite a lot! – Thrillofit86 Dec 19 '19 at 09:26

You can also configure your FastAPI app to run on Gunicorn with Uvicorn worker processes. The following is the command line you can keep in the Procfile used by Heroku to bring your app up; it will spin up your app on 3 worker processes:

web: gunicorn -w 3 -k uvicorn.workers.UvicornWorker main:app

For a detailed step-by-step guide, you can watch this video tutorial that shows how to deploy FastAPI on Heroku in just 6 minutes, or read this blog post for a detailed walkthrough of how to create and deploy a Python-based FastAPI app on Heroku.

navule

By comparison with pure Uvicorn, Uvicorn workers managed by Gunicorn process requests much faster, which is a large part of why FastAPI outperforms Flask.

In production, it is advisable to use Gunicorn with Uvicorn workers.

In the Heroku Procfile:

web: gunicorn -k uvicorn.workers.UvicornWorker --bind 0.0.0.0:$PORT main:app

$PORT: instead of binding to a fixed port, it is preferred to use the dynamic port that Heroku assigns at runtime.
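If you ever start the server from Python code rather than a Procfile command, the same dynamic-port idea applies; a minimal sketch (reading the PORT variable Heroku sets, with 5000 as a local fallback):

```python
import os

# Heroku injects the port via the PORT environment variable;
# fall back to 5000 for local development.
port = int(os.environ.get("PORT", 5000))
print(port)
```

In a `__main__` guard you would then pass this value to `uvicorn.run("main:app", host="0.0.0.0", port=port)`.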

Krish Na

In my case, I hadn't updated the WSGI app to be an ASGI app.

When running:

gunicorn pm.wsgi --log-level=debug \
-k uvicorn.workers.UvicornWorker --log-file - --timeout 60

I got an error (screenshot not reproduced here).

So, I added the WSGIMiddleware:

import os

from django.core.wsgi import get_wsgi_application
from dj_static import Cling  # Cling serves static files from the WSGI app
from uvicorn.middleware.wsgi import WSGIMiddleware

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "pm.settings")

# Wrap the WSGI application so Uvicorn's ASGI worker can serve it
application = WSGIMiddleware(Cling(get_wsgi_application()))
jmunsch