I have a Sinatra app that is performing considerably slower than I would like. My first suspicion was that my own code was the bottleneck, so I extracted it into a standalone benchmarking script:
require 'benchmark'

THREADS = 100
ITERATIONS = 1

def make_calls
  ITERATIONS.times do
    # ... my stuff here
  end
end

1.upto(THREADS) do |n|
  Benchmark.bm do |bm|
    # Spawn n threads, each running make_calls, and time how long
    # it takes for all of them to finish.
    threads = []
    n.times do
      threads << Thread.new { make_calls }
    end
    bm.report("#{n} threads:") { threads.each { |t| t.value } }
  end
end
Where make_calls calls my own code. I'm pleased to say that by the time we have reached 100 threads, the cumulative time of make_calls across all threads is 0.6 seconds, which is fast enough for my purposes. The reason I am wrapping the make_calls method in threads above is that my own code uses threads (Java native threads via a java.util.concurrent fixed thread pool of 500, i.e. an ExecutorService), and I wanted to make sure this behaved nicely in an environment that potentially uses other threading models. A single iteration in a single thread runs in about 0.02 seconds once JRuby has warmed up.
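For context, the thread-pool usage inside make_calls looks roughly like this (a minimal sketch: the pool size and ExecutorService usage are as described above, but do_one_call and the number of jobs per iteration are stand-ins for my own code):

require 'java'

java_import java.util.concurrent.Executors

# Fixed pool of 500 native Java threads, shared by every caller
POOL = Executors.new_fixed_thread_pool(500)

# Stand-in for the real unit of work
def do_one_call(i)
  sleep 0.001
end

def make_calls
  ITERATIONS.times do
    # Fan the work out to the pool and block until every Future completes
    futures = (1..10).map { |i| POOL.submit { do_one_call(i) } }
    futures.each(&:get)
  end
end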
So the above is good, but when I add this to a Sinatra web server with the following:
require 'sinatra'
get '/' do
# ... my stuff here
end
The response time on a request to this endpoint is approximately 0.5 seconds, and if I increase the number of concurrent requests the response time goes up in a linear fashion. I've tried this with both jetty-rackup and Trinidad, using JRuby 1.7 on both Linux and Solaris.
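For what it's worth, the concurrent-request numbers come from nothing more sophisticated than firing simultaneous GETs and timing them, roughly like this (an illustrative sketch; the port and client code here are assumptions, not my exact harness):

require 'net/http'
require 'benchmark'

CONCURRENCY = 10
URL = URI('http://localhost:4567/')  # assuming Sinatra's default port

Benchmark.bm do |bm|
  bm.report("#{CONCURRENCY} concurrent requests:") do
    # Start CONCURRENCY requests at once and wait for all of them
    threads = CONCURRENCY.times.map do
      Thread.new { Net::HTTP.get_response(URL) }
    end
    threads.each(&:join)
  end
end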
I have tried to optimise the Trinidad instance (max/min runtimes etc.) to no avail. The best performance we have seen is from running either server in threadsafe! mode, and both servers show comparable performance in that mode.
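For reference, the Trinidad tuning attempts were along these lines in config/trinidad.yml (an illustrative sketch; the runtime-pool keys are the ones I understand Trinidad/jruby-rack to use, so treat the exact names and values as an assumption rather than my exact config):

---
port: 8080
jruby_min_runtimes: 1   # a single shared runtime once the app is threadsafe
jruby_max_runtimes: 1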
Can anyone explain to me where the time is being consumed or how to improve this setup?