
We're running Sidekiq workers that use Neography to do batch operations.

Our batch array holds up to 400 operations before flushing (we've tried lower numbers as well).
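For context, the batching pattern looks roughly like this (a minimal sketch with hypothetical names, not our actual worker code): operations accumulate in an array and are flushed once the threshold is reached.

```ruby
# Hypothetical sketch of the batch-and-flush pattern described above.
BATCH_SIZE = 400

class BatchFlusher
  # The block receives the accumulated operations, e.g. it could call
  # @neo.batch(*ops) with Neography.
  def initialize(&flush)
    @ops = []
    @flush = flush
  end

  def add(op)
    @ops << op
    flush! if @ops.size >= BATCH_SIZE
  end

  def flush!
    return if @ops.empty?
    @flush.call(@ops)
    @ops.clear # drop references so the flushed operations can be GC'd
  end
end
```

For example, adding 401 operations triggers one automatic flush of 400, leaving one operation for a final manual flush!.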

We hit the R14 memory error on Heroku and things grind almost to a halt, so we suspect a memory leak of some sort (I have already checked for bloat). However, we've been unable to figure out where it is or how to prevent it.

We've tried several memory-debugging gems such as ruby-prof, [...] without any results or clues, read object counts via ObjectSpace, and even tried debugging line by line, launching the process not as a background job but via rails c, monitoring memory usage with the following command: top -pid `ps auw | grep -i 'rails c' | head -n 1 | awk '{print $2}'` -stats RSIZE.
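For reference, this is one way to sample object counts from inside rails c (a generic snippet, not our exact debugging code): force a GC, then print the most numerous object types. Comparing two samples taken before and after a batch run can show which type is growing.

```ruby
# Sample live object counts by type. ObjectSpace.count_objects is part
# of Ruby core; keys like :T_STRING and :T_ARRAY are per-type counts.
GC.start
counts = ObjectSpace.count_objects
counts.select { |k, _| k.to_s.start_with?('T_') }
      .sort_by { |_, n| -n }
      .first(5)
      .each { |type, n| puts format('%-12s %8d', type, n) }
```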

I tried updating our Ruby version to the latest (2.1.0), but nothing changed.

Any ideas to help make our workers happier are welcome!

Quentin Rousseau

1 Answer


Neo4j internally uses a lot of caching, which can consume a serious amount of memory. You can try switching off Neo4j's object cache by setting cache_type=none; see http://docs.neo4j.org/chunked/stable/configuration-caches.html.
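Assuming a standalone Neo4j server (1.x/2.x), the setting goes in the server's properties file, roughly like this:

```
# conf/neo4j.properties -- disable Neo4j's object cache
cache_type=none
```

Restart the server after changing it; note this trades memory for read performance, since nodes and relationships are no longer cached as objects.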

Stefan Armbruster