
I have Redis Stack running inside a Podman container, and it slows down for specific commands after just a couple of hours of uptime (Redis v6.2.13, RediSearch v2.6.12, RedisJSON v2.4.7).

For example, `FT.SEARCH idx LIMIT 0 0`, which counts the entries in a specific index, runs slower and slower over time.

I have a .NET service that inserts data (150k keys, stored as JSON) every ~10 min, and each key expires at a different time (the TTL differs per index).
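One of the suspicions raised later in the comments is that many keys expire at once. A way to rule that out on the writer side is to add a small random jitter to each key's TTL so expirations spread out instead of landing in the same cycle. A minimal sketch (the helper name and the jitter window are illustrative, not from the original setup):

```python
import random

def staggered_ttl(base_ttl_s: int, jitter_s: int = 60) -> int:
    """Return the base TTL plus a random 0..jitter_s offset so that keys
    written in the same batch do not all expire in the same second."""
    return base_ttl_s + random.randint(0, jitter_s)

# e.g. for a batch written with a nominal 10-minute TTL:
ttl = staggered_ttl(600)  # somewhere in [600, 660]
```

The same idea applies regardless of client library: whatever sets EXPIRE (or the TTL passed through Redis.OM) just takes the jittered value instead of the fixed one.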

The main consumer is a .NET API serving ~70 clients that retrieve data from Redis every ~15 sec. I'm monitoring SLOWLOG, and command durations increase from ~10 ms when the container starts to ~300 ms after 24 h. This happens for just a couple of commands (especially `FT.SEARCH idx LIMIT 0 0`).
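To track which commands degrade over time, the raw `SLOWLOG GET` reply can be summarized per command. Each raw entry is a list: id, unix timestamp, duration in microseconds, argument array, then client address and name. A rough sketch that works on that reply shape (fetching it, e.g. via redis-py, is assumed; the sample entry is made up for illustration):

```python
from collections import defaultdict

def summarize_slowlog(entries):
    """Reduce raw SLOWLOG GET entries to {command: worst duration in ms}.

    Each raw entry looks like:
    [id, unix_ts, duration_us, [cmd, *args], client_addr, client_name]
    """
    worst = defaultdict(float)
    for _id, _ts, duration_us, args, *_client in entries:
        cmd = args[0] if args else "?"
        if isinstance(cmd, bytes):          # raw replies may be bytes
            cmd = cmd.decode()
        worst[cmd.upper()] = max(worst[cmd.upper()], duration_us / 1000.0)
    return dict(worst)

# Illustrative entry (not real data): one FT.SEARCH that took 300 ms
sample = [[12, 1690000000, 300000,
           ["FT.SEARCH", "idx", "LIMIT", "0", "0"],
           "127.0.0.1:55555", ""]]
print(summarize_slowlog(sample))  # {'FT.SEARCH': 300.0}
```

Logging this summary every hour makes the "10 ms at start, 300 ms after 24 h" curve visible per command rather than per SLOWLOG snapshot.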

Both .NET services use Redis.OM. Everything goes back to normal when I restart the container. What could the possible causes be? Thanks in advance.

I deactivated AOF and left only RDB backups running every minute or so. Nothing changes if I restart the sites in IIS (or recycle the app pools). I've turned on active memory defragmentation, but again nothing changed.
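Since active defrag was one of the knobs already tried, it may help to poll `INFO MEMORY` periodically and record whether `mem_fragmentation_ratio` climbs with uptime. A sketch over the dict that clients such as redis-py return from `INFO` (the 1.5 threshold is a common rule of thumb, not a value from this setup, and the snapshot values are made up):

```python
def fragmentation_suspect(info: dict, threshold: float = 1.5) -> bool:
    """True when RSS is much larger than logical memory use, i.e. the
    slowdown may be fragmentation rather than the workload itself."""
    return float(info.get("mem_fragmentation_ratio", 1.0)) > threshold

# Illustrative INFO MEMORY snapshot (values are invented):
snapshot = {"used_memory": 2_000_000_000, "mem_fragmentation_ratio": 1.8}
print(fragmentation_suspect(snapshot))  # True
```

If the ratio stays near 1.0 while latency still grows, fragmentation can be crossed off the list and attention shifts to the index or client side.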

  • Are you using Redis Stack? – Guy Royse Jul 24 '23 at 17:00
  • @GuyRoyse yes. This is the image version: redis/redis-stack:6.2.6-v9 – alexandru-cernatescu Jul 25 '23 at 06:36
  • What does your Schema look like? – Guy Royse Jul 26 '23 at 14:43
  • Problem aside, why not using FT.INFO to get the number of documents in the index? – A. Guy Jul 28 '23 at 09:24
  • @GuyRoyse , what do you mean by Schema? – alexandru-cernatescu Jul 30 '23 at 23:49
  • @A.Guy , I just went along with the out-of-the-box methods provided by Redis.OM. As of now I also suspect I have some kind of memory leak. While the old variant of the app (getting data from the db and storing it in a memcache) uses about 400 MB of memory, the new version (using only Redis) goes up to 11 GB. After using dotMemory (from JetBrains) I can see that most of the memory is used by objects from Redis.OM (especially JObjects, when de/serializing data). I'll check the memory leak and I hope I'll come back with a solution – alexandru-cernatescu Jul 30 '23 at 23:52
  • Each instance of the RedisCollection holds all the documents you've enumerated to track them for updates. If you are not updating your documents from the collection (which it sounds like you probably aren't) you can disable this feature by setting `saveState` to false in the constructor/factory method of the RedisCollection. Make sure you aren't holding onto a strong reference to the RedisCollection after they go out of scope (this will prevent GC), also if your data are quite large, it's possible that it's stuck in the Large Object Heap awaiting collection. – slorello Jul 31 '23 at 11:48
  • @slorello, I did change my code so every constructor has `saveState` false. The memory issue is solved, but Redis still slows down. I'm going to change the approach and I'll post the results here – alexandru-cernatescu Aug 01 '23 at 06:35
  • Is it possible that your memory is maxing out and it's switching over to swap? That could potentially slow things down. Maybe try disabling swap? https://docs.redis.com/latest/rs/installing-upgrading/configuring/linux-swap/ – slorello Aug 01 '23 at 20:51
  • @slorello, as I can see in my VM, swapping is enabled, but not a single byte is used. [after running top](https://imgtr.ee/image/Qmbcu) But now I can also see that Redis uses a lot of CPU time (TIME+). Weird, because the container was up for only 7 hours. I'm thinking that maybe I have too many keys expiring at once. But I still don't have an explanation for why it slows down over time. After 7 hours, the median value in the slow log is somewhere around 100 ms. After the container is restarted, everything goes back to normal – alexandru-cernatescu Aug 01 '23 at 23:26
  • Do you have more than one CPU on there? .75 load isn’t terribly concerning if you are running on multi-core but if it’s a single core it could be a bunch of background processes that redis is doing that’s slowing it down? – slorello Aug 03 '23 at 01:10
  • Yep, there are multiple CPUs. Today I ran `LATENCY DOCTOR` and `MEMORY DOCTOR`. I've activated `activedefrag` and it still runs the same. I'll try setting transparent huge pages to never and see how that goes. https://imgtr.ee/image/L0Dbv – alexandru-cernatescu Aug 03 '23 at 11:20

0 Answers