Maybe this is not good practice, but at this point my web application coexists with its API in the same app/ directory under the same server. I am wondering if there is any way to speed up how the server processes requests in this case?

For example, when I am sending a request to the API under the same server, I use:

require 'rest-client'

tasks = RestClient.get 'localhost:8000/tasks', {
  content_type: :json,
  accept: :json,
  'X-User-Email' => 'blahblah@gmail.com',
  'X-User-Token' => 'blahblah'
}

However, sometimes it takes a ridiculously long time to get the result, or it fails with a 'Timeout reading data from server' error. I am wondering if it is because the server is sending requests to itself AND receiving requests from itself at the same time. I have tried opening a second server on port 8000 and sending the request from the port 3000 server to it, and it is so much faster. Is Rails that bad at multithreading?

$ rails s -p 8000 -P 42323
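The self-request suspicion can be illustrated with a pure-Ruby sketch (illustrative only, not Rails code): one worker thread stands in for a server that handles a single request at a time, and the outer "request" blocks waiting on a nested request to the same server, which can never be picked up.

```ruby
require 'timeout'

# One worker thread standing in for a server that handles one
# request at a time (e.g. a dev server serialised by Rack::Lock).
jobs = Queue.new

worker = Thread.new do
  loop { jobs.pop.call }
end

# The "outer request": its handler fires a nested request at the same
# server and blocks waiting for the response -- just like an app
# calling its own API in the same process.
outcome = Queue.new
jobs << lambda do
  inner_done = Queue.new
  jobs << lambda { inner_done << :ok }  # nested "request"
  begin
    # The only worker is busy right here, so the nested job never runs.
    Timeout.timeout(1) { inner_done.pop }
    outcome << :completed
  rescue Timeout::Error
    outcome << :deadlocked
  end
end

result = outcome.pop
puts result   # => deadlocked
worker.kill
```

With a second server (or a second worker thread), the nested request has somewhere to run and the wait completes immediately.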

Also, if I refresh the port 8000 server's page first and then send the request, refreshing the port 3000 server's page is also much faster. Is it because the response has been cached by Rack?

P.S. I apologize if I have used any terms incorrectly.
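One mitigation for the 'Timeout reading data from server' failures above, whatever their root cause, is to set explicit client-side timeouts so the caller fails fast instead of hanging. A stdlib-only sketch with Net::HTTP (the throwaway TCPServer here is just a stand-in for the API endpoint, not part of the real setup):

```ruby
require 'net/http'
require 'socket'

# Throwaway single-response HTTP server standing in for the API.
server = TCPServer.new('127.0.0.1', 0)
port = server.addr[1]
backend = Thread.new do
  client = server.accept
  # Consume the request headers, then send a minimal JSON response.
  nil while client.gets.strip != ''
  body = '{"tasks":[]}'
  client.write "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\n" \
               "Content-Length: #{body.bytesize}\r\n\r\n#{body}"
  client.close
end

http = Net::HTTP.new('127.0.0.1', port)
http.open_timeout = 2   # give up quickly if the connection cannot be made
http.read_timeout = 5   # give up quickly instead of waiting a minute for data
response = http.get('/tasks', 'Accept' => 'application/json')
puts response.body
backend.join
server.close
```

RestClient accepts similar `timeout:` and `open_timeout:` options via `RestClient::Request.execute` if you prefer to stay with that gem.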

EDIT:

I have tried debugging from the API and the application controller. It seems like most of the delay happens right before the request actually hits the API controller. Server output when sending a request from the port 8000 server to the port 8000 API:

Started GET "/" for 127.0.0.1 at 2016-11-11 17:37:14 +0800
  User Load (0.7ms)  SELECT  "users".* FROM "users" WHERE "users"."id" = $1  ORDER BY "users"."id" ASC LIMIT 1  [["id", 1234567]]
Processing by TasksApplicationController#index as HTML


Started GET "/tasks" for 127.0.0.1 at 2016-11-11 17:38:14 +0800
Processing by APIController#index as JSON

So before it even starts processing the API route, ONE MINUTE has passed!

If instead I run two local servers and send the request from the port 3000 server to the port 8000 server, this is the output of the much faster version:

Started GET "/tasks" for 127.0.0.1 at 2016-11-11 17:42:25 +0800
Processing by APIController#index as JSON

The difference is that this part of the output is gone, and with it the one-minute delay:

Started GET "/" for 127.0.0.1 at 2016-11-11 17:37:14 +0800
  User Load (0.7ms)  SELECT  "users".* FROM "users" WHERE "users"."id" = $1  ORDER BY "users"."id" ASC LIMIT 1  [["id", 1234567]]
Processing by TasksApplicationController#index as HTML

Why is it that when I send a request to the port 8000 server FROM the port 8000 server, there is this additional part that causes such a long delay?

whales

1 Answer

The issue is quite likely due to the fact that you are running WEBrick in development mode. Although WEBrick is a multi-threaded server and Rails is thread-safe, in development mode Rails inserts the Rack::Lock middleware, which serialises requests so only one can be processed at a time.

To enable WEBrick to be fully multi-threaded in development mode you will need to monkey-patch the middleware stack and remove Rack::Lock by creating the following initialiser:

config/initializers/multithreaded_webrick.rb:

# Remove Rack::Lock so WEBrick can be fully multi-threaded.
require 'rails/commands/server'

class Rails::Server
  def middleware
    middlewares = []
    middlewares << [Rails::Rack::Debugger] if options[:debugger]
    middlewares << [::Rack::ContentLength]

    Hash.new middlewares
  end
end

Edit I found the original source for this information:

How rails resolve multi-requests at the same time?

In addition to the initialiser you may also need to set:

config.cache_classes = true
config.eager_load = true
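On Rails 4.x the same effect can often be achieved through configuration alone, without the monkey-patch, since Rack::Lock is only inserted when concurrency is disallowed. A minimal sketch of the development environment file (whether `allow_concurrency` is honoured depends on your exact Rails version, so treat this as an assumption to verify):

```ruby
# config/environments/development.rb
Rails.application.configure do
  # Rack::Lock is only added to the stack when allow_concurrency is
  # false, so enabling it lets requests overlap in development.
  config.allow_concurrency = true
  config.cache_classes = true
  config.eager_load = true
end
```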
David
  • I am running puma as the server for rails though. I thought that puma is built for concurrency? – whales Nov 11 '16 at 08:24
  • hmm yes puma is supposed to be concurrent for development. I don't have time to actually check that out - need to rush to work xD. But as far as your question is concerned - rails is concurrent and yes it can run multi-threaded in development without any issues - likely it's just a configuration issue somewhere. – David Nov 11 '16 at 08:44
  • might be worth outputting debug information from the API side to try and isolate if the issue is with it hitting the API in the first place or whether something within the API call is causing the lockup. – David Nov 11 '16 at 08:47
  • @whales are you sure you are running puma as the server? From what I can see from your question you aren't - according to their page you need to do `rails s Puma` – David Nov 11 '16 at 09:36
  • Maybe that is optional? When I just ran “rails s”, the server starting output is something like: "Booting Puma" etc. Also even if I ran "rails s Puma", the problem still exists. – whales Nov 11 '16 at 09:56
  • ah I see, I don't have any experience with using Puma ... refer to my earlier comment about debugging ... it may help to confirm where the issue lies. Maybe it would be worth testing it with Webrick too (with the applied changes) to see if you get the same behaviour or whether this is specific to Puma. – David Nov 11 '16 at 10:03
  • thanks @henners66. I have added to the question the debugging output. This question might just have to remain a mystery for now :( – whales Nov 13 '16 at 13:49