
I'm a bit confused about asynchrony in FastAPI. Suppose I want to communicate with a database asynchronously. I have the following function that I will later use in some endpoint to write a record:

async def post(payload: SummaryPayloadSchema) -> int:
    summary = TextSummary(
        url=payload.url,
        summary="dummy summary",
    )
    await summary.save()
    return summary.id

But what I want is to be able to get some record from the database while this write is happening. In other words, I want to run the coroutine above concurrently with another one that reads some data. How can I achieve that? Do I need to bring in something like Celery to run both tasks concurrently, or can this be done with FastAPI alone? And if, within the same endpoint, I also want to fetch other records concurrently, does the same approach apply? Thank you very much, I hope you can help me!
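
To make it concrete, here is roughly the kind of endpoint I have in mind, building on the post coroutine above (read_summary is just a made-up helper, and I am not sure asyncio.gather is the right tool here):

import asyncio

from fastapi import FastAPI

app = FastAPI()


async def read_summary(summary_id: int) -> TextSummary:
    # Made-up read helper; with Tortoise-style models this would be e.g.:
    return await TextSummary.get(id=summary_id)


@app.post("/summaries/")
async def create_summary(payload: SummaryPayloadSchema):
    # Run the write and the read on the same event loop; while one coroutine
    # awaits the database, the other can make progress.
    new_id, other_record = await asyncio.gather(
        post(payload),      # the write coroutine defined above
        read_summary(1),    # a concurrent read
    )
    return {"id": new_id, "other": other_record}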

  • Use a database library that supports async. There's async support in SQLAlchemy as well now, and there are async drivers for Postgres, MySQL, etc. (a small sketch of what that can look like follows these comments). – MatsLindh Nov 22 '22 at 21:13
  • So if I use one of these libraries, will my application automatically make the queries asynchronously? That is, if I make one request and then another (suppose the first one has not completed yet), will my app process the queries asynchronously, pausing at the awaits? Do I need anything else, like creating tasks? Thanks a lot – Diego L Nov 22 '22 at 22:39
  • Please have a look at FastAPI's documentation on [Async SQL Databases](https://fastapi.tiangolo.com/advanced/async-sql-databases/). [This answer](https://stackoverflow.com/a/71517830/17865804) might also clarify things for you about `def` vs `async def` in FastAPI. – Chris Nov 23 '22 at 05:41
  • You'll have to explicitly call `await` yourself when making an async function call. If you want to scale it out so that it's handled automagically (i.e. processes just waiting around), scale the number of processes instead - this requires that you have a good idea of the load your application sees and how much time is spent waiting on IO compared to pure CPU load. – MatsLindh Nov 23 '22 at 08:59
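
A minimal sketch of the kind of async read MatsLindh's first comment points at, assuming SQLAlchemy 1.4+ with the asyncpg driver (the connection string, table and column names here are made up):

from sqlalchemy import text
from sqlalchemy.ext.asyncio import create_async_engine

# Made-up connection string; asyncpg is an async Postgres driver.
engine = create_async_engine("postgresql+asyncpg://user:pass@localhost/db")


async def read_summary(summary_id: int):
    # The execute() call is awaited, so the event loop is free to run other
    # coroutines (such as the post() write above) while the query is in flight.
    async with engine.connect() as conn:
        result = await conn.execute(
            text("SELECT summary FROM text_summary WHERE id = :id"),
            {"id": summary_id},
        )
        row = result.first()
        return row[0] if row else None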

0 Answers