
I was following the idea given here (https://stackoverflow.com/a/65699375/4314952) to set up a shared cache that can be used by multiple uvicorn workers.

However, I noticed that this does not work, since a separate cache is created for each worker. How can I avoid this?

If I run the code below, I see `init cache` printed multiple times on the console. Also, after calling `/setid` once and then calling `/getid` many times, I often get 0 as a result, depending on which worker processes the request.

Please note that in my use case I must run uvicorn programmatically (https://www.uvicorn.org/#running-programmatically).

# main.py
import time

import uvicorn
from aiocache import Cache
from fastapi import FastAPI, status

app = FastAPI()
cache = Cache(namespace="main")


class Meta:
    def __init__(self):
        # printed once per worker process, since every worker imports this module
        print("init cache")

    async def get_id(self):
        return await cache.get("id", default=0)

    async def set_id(self):
        value = time.time() * 1000
        await cache.set("id", value)


meta = Meta()


@app.post("/setid")
async def set_id():
    await meta.set_id()
    return status.HTTP_200_OK


@app.get("/getid")
async def get_id():
    return await meta.get_id()


if __name__ == "__main__":
    num_workers = 8
    uvicorn.run("main:app", host="0.0.0.0", port=8000, workers=num_workers)
 
    `cache = Cache(namespace="main")` uses `Cache.MEMORY` as default, which belongs to only one process. For sharing data between processes you need to use `Cache.REDIS` or `Cache.MEMCACHED`. – alex_noname May 19 '21 at 10:14
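Following that suggestion, here is a minimal sketch of what the change might look like, assuming a Redis server is reachable at 127.0.0.1:6379 and aiocache is installed with its Redis extra (`pip install aiocache[redis]`). Only the cache construction changes; the rest of main.py can stay as it is:

# Untested sketch: back the cache with Redis instead of process-local memory,
# so every uvicorn worker reads and writes the same store.
from aiocache import Cache

cache = Cache(Cache.REDIS, endpoint="127.0.0.1", port=6379, namespace="main")

With Memcached the idea is the same, e.g. `Cache(Cache.MEMCACHED, endpoint="127.0.0.1", port=11211, namespace="main")`, provided a Memcached server is running at that address.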

0 Answers