I am currently thinking about improving the performance of `Rails.cache.write` when using Dalli to write items to the MemCachier cloud.
The stack, as it relates to caching, is currently:
Heroku, the MemCachier Heroku add-on, Dalli 2.6.4, Rails 3.0.19.
I am using New Relic for performance monitoring.
I am currently fetching "active students" for a given logged-in user, represented by a `BusinessUser` instance, when its `active_students` method is called from a controller handling a request that requires a list of "active students":
class BusinessUser < ActiveRecord::Base
  ...
  def active_students
    Rails.cache.fetch("/studio/#{self.id}/students") do
      customer_users.active_by_name
    end
  end
  ...
end
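For context, the call site looks roughly like this; `StudentsController` and `current_business_user` are illustrative stand-ins, not the actual names in the app:

class StudentsController < ApplicationController
  # Every request to this action goes through the cache fetch above.
  def index
    @students = current_business_user.active_students
  end
end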
After looking at New Relic, I've narrowed down one big performance hit for the app: setting key values on MemCachier. Writes take 225ms on average. Further, it looks like setting memcache key values blocks the main thread and eventually disrupts the request queue. Obviously this is undesirable, especially when the whole point of the caching strategy is to reduce performance bottlenecks.
In addition, I've benchmarked the cache storage with plain Dalli and with `Rails.cache.write`, timing 1000 cache sets of the same value:
heroku run console -a {app-name-redacted}
irb(main):001:0> require 'dalli'
=> false
irb(main):002:0> cache = Dalli::Client.new(ENV["MEMCACHIER_SERVERS"].split(","),
irb(main):003:1* {:username => ENV["MEMCACHIER_USERNAME"],
irb(main):004:2* :password => ENV["MEMCACHIER_PASSWORD"],
irb(main):005:2* :failover => true,
irb(main):006:2* :socket_timeout => 1.5,
irb(main):007:2* :socket_failure_delay => 0.2
irb(main):008:2> })
=> #<Dalli::Client:0x00000006686ce8 @servers=["server-redacted:11211"], @options={:username=>"username-redacted", :password=>"password-redacted", :failover=>true, :socket_timeout=>1.5, :socket_failure_delay=>0.2}, @ring=nil>
irb(main):009:0> require 'benchmark'
=> false
irb(main):010:0> n = 1000
=> 1000
irb(main):011:0> Benchmark.bm do |x|
irb(main):012:1* x.report { n.times do ; cache.set("foo", "bar") ; end }
irb(main):013:1> x.report { n.times do ; Rails.cache.write("foo", "bar") ; end }
irb(main):014:1> end
user system total real
Dalli::Server#connect server-redacted:11211
Dalli/SASL authenticating as username-redacted
Dalli/SASL: username-redacted
0.090000 0.050000 0.140000 ( 2.066113)
Dalli::Server#connect server-redacted:11211
Dalli/SASL authenticating as username-redacted
Dalli/SASL: username-redacted
0.100000 0.070000 0.170000 ( 2.108364)
With plain Dalli `cache.set`, we spend 2.066113s writing 1000 entries into the cache, for an average `cache.set` time of 2.07ms.
With `Rails.cache.write`, we spend 2.108364s writing 1000 entries into the cache, for an average `Rails.cache.write` time of 2.11ms.
⇒ It seems like the problem is not with MemCachier itself: setting a tiny value takes roughly 2ms, so the 225ms writes seen in production presumably come from the amount of data we are attempting to store.
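A quick way to sanity-check that theory from a console (a sketch only; `business_user` stands in for a real record, and this assumes the default Marshal serialization that Dalli uses):

# Dalli serializes Ruby objects with Marshal by default, so the size of
# the marshaled array approximates what goes over the wire on a cache set.
students = business_user.customer_users.active_by_name.to_a
payload  = Marshal.dump(students)
puts "cached payload is roughly #{payload.bytesize / 1024} KB"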
According to the docs for the `#fetch` method, it looks like it is not the way I want to go if I want to throw cache sets into a separate thread or a worker, because I can't split the `write` out from the `read` - and self-evidently, I don't want to be reading asynchronously.
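What I'd like is something closer to this sketch, where the miss path hands the write off to some asynchronous mechanism (`async_cache_write` is a placeholder I made up, not an existing API):

def active_students
  key = "/studio/#{self.id}/students"
  cached = Rails.cache.read(key)   # synchronous read, as before
  return cached if cached

  students = customer_users.active_by_name.to_a
  async_cache_write(key, students) # hypothetical fire-and-forget write
  students                         # serve the fresh result immediately
end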
Is it possible to reduce the bottleneck by throwing `Rails.cache.write` into a worker when setting key values? Or, more generally, is there a better pattern to do this, so that I am not blocking the main thread every time I want to perform a `Rails.cache.write`?
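For instance, the simplest version of `async_cache_write` I can imagine is a bare thread (a sketch only; I don't know whether sharing the underlying Dalli connection across threads like this is safe, which is part of what I'm asking):

def async_cache_write(key, value)
  # Fire-and-forget: the request thread returns immediately while the
  # write happens in the background. Errors here are silently lost.
  Thread.new do
    Rails.cache.write(key, value)
  end
end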