
I have a small FastAPI example here:

# run.py
from functools import cache

import uvicorn
from fastapi import FastAPI, Depends

app = FastAPI()


class Dep:

    def __init__(self, inp):
        self.inp = inp

    @cache  # keyed on (self,), so each Dep instance caches separately
    def __call__(self):
        print(self.inp * 100)
        return self.inp


@app.get("/a")
def a(v: str = Depends(Dep("a"))):
    return v


@app.get("/a/a")
def aa(v: str = Depends(Dep("a"))):
    return v


@app.get("/b")
def b(v: str = Depends(Dep("b"))):
    return v


@app.get("/b/b")
def bb(v: str = Depends(Dep("b"))):
    return v


def main():
    uvicorn.run(
        "run:app",
        host="0.0.0.0",
        reload=True,
        port=8000,
        workers=1
    )


if __name__ == "__main__":
    main()

I run `python run.py` and the application spins up.

What I expect is that:

the first time I hit the /a or /a/a endpoints, it shows logs and prints 100 "a"s for me; the next times, no logging happens, because of the @cache decorator

and

the first time I hit the /b or /b/b endpoints, it shows logs and prints 100 "b"s for me; the next times, no logging happens, because of the @cache decorator

What happens

The first time I hit /b, it shows logs; the next times, no log. The first time I hit /b/b, it shows logs; the next times, no log. The first time I hit /a, it shows logs; the next times, no log. The first time I hit /a/a, it shows logs; the next times, no log.


The reason is that each time I pass a value to the Dep class, it creates a new object, and the cached __call__ method belongs to a different object each time. That is why the caching is not working the way I expected.

And the caching that does happen is done by FastAPI's dependency system. That is why, when I call the same endpoint again, I do not see new logs.
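To make the first point concrete, here is a minimal sketch (outside FastAPI) showing that two `Dep("a")` instances keep separate cache entries, because `functools.cache` on a method includes `self` in the cache key:

```python
from functools import cache

calls = []  # records every cache miss


class Dep:
    def __init__(self, inp):
        self.inp = inp

    @cache  # keyed on (self,), so each instance gets its own entry
    def __call__(self):
        calls.append(self.inp)
        return self.inp


d1, d2 = Dep("a"), Dep("a")
assert d1 is not d2   # Depends(Dep("a")) in two routes -> two objects
d1()                  # first call on d1: cache miss
d1()                  # cache hit: nothing appended
d2()                  # different `self`, so another miss
assert calls == ["a", "a"]
```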

But the question is how do I

  • pass arguments to the callable dependency (a class)
  • and also cache the value the function returns

Extra information

I tried to achieve the same using the approach below:

# run.py
from functools import cache

import uvicorn
from fastapi import FastAPI, Depends, Request

app = FastAPI()


@cache
def dep(inp):
    # dep("a") always returns the same cached sub_dep object, so
    # /a and /a/a share one dependency (and one cache entry).
    @cache
    def sub_dep():
        print(inp * 100)
        return inp

    return sub_dep



@app.get("/a")
def a(v: str = Depends(dep("a"))):
    return v


@app.get("/a/a")
def aa(v: str = Depends(dep("a"))):
    return v


@app.get("/b")
def b(v: str = Depends(dep("b"))):
    return v


@app.get("/b/b")
def bb(v: str = Depends(dep("b"))):
    return v


def main():
    uvicorn.run(
        "run:app",
        host="0.0.0.0",
        reload=True,
        port=8000,
        workers=1
    )


if __name__ == "__main__":
    main()

and it works as expected.

Amin Ba
  • Does [this](https://stackoverflow.com/a/76322910/17865804) answer your question? – Chris Jul 27 '23 at 16:19
  • 1
    Why don't you just create the objects globally, and then use them? So `dep_a = Dep("A")` etc, and then using them like `Depends(dep_a)` etc? Sometimes the simple solution is the best – M.O. Jul 27 '23 at 23:15

1 Answer


The first time I hit /a, it shows logs. The next times, no log. The first time I hit /a/a, it shows logs. The next times, no log.

You created four distinct objects at import time when you defined those functions: a pair of "a" callables and a pair of "b" callables. There's no relationship among those four objects; they are all separate from one another. So a GET that uses one has no effect on the other three.


Consider creating a cached a_result() helper function which both the /a and /a/a endpoints call, and similarly a b_result() helper shared by both /b and /b/b. Now there is a linkage across endpoints.

(If web clients can send an arbitrary arg to such a helper, then prefer lru_cache instead, so there's no memory leak.)
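A sketch of that idea, with the hypothetical `a_result()`/`b_result()` names, using `functools.cache` as in the question:

```python
from functools import cache


@cache
def a_result():
    print("a" * 100)   # runs only on the very first call
    return "a"


@cache
def b_result():
    print("b" * 100)
    return "b"


# Both the /a and /a/a endpoints would call a_result(), so a GET
# on either endpoint warms the cache for the other.
assert a_result() == a_result() == "a"
```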

J_H
  • You are right. Do you have any idea how to achieve what I am trying to achieve? Can you see the updated question? – Amin Ba Jul 27 '23 at 16:08
  • Your second solution (a_result and b_result) is not what I am looking for, because it is not reusable. I want to pass an argument to dynamically generate the dependency, not create many dependency functions. – Amin Ba Jul 27 '23 at 16:24
  • I was just trying to phrase it in terms of what you'd already implemented. You are free to create your own datastructure, perhaps based on a `dict`, or create your own decorator, to do arbitrary caching according to the needs of your use case. I _will_ observe that with a single `@lru_cache` helper function all it takes is to have it accept an initial `category` argument, such as `"a"` or `"b"`, and that will break apart /a/a caching from /b/b caching. The helper is free to accept additional args if that is convenient for your use case. We essentially probe cache with a key of `tuple(*args)`. – J_H Jul 27 '23 at 17:46
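The single-helper idea from the last comment might be sketched like this (the `result()` name is hypothetical; `lru_cache` is bounded in case the category ever comes from a client):

```python
from functools import lru_cache

misses = []  # records every cache miss


@lru_cache(maxsize=256)   # bounded, unlike functools.cache
def result(category: str) -> str:
    misses.append(category)
    return category


result("a")               # miss: populates the "a" entry
result("a")               # hit: this entry is shared by /a and /a/a
result("b")               # separate key, used by /b and /b/b
assert misses == ["a", "b"]
```

In the question's setup, each route could then depend on the same helper with its own argument, e.g. `Depends(lambda: result("a"))`.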