I'd probably rephrase your question as "why is there a separate web and application tier in Ruby apps?"
In production deployments of Ruby applications there is typically a web tier (e.g. Apache or Nginx) and an application tier (e.g. Unicorn, Thin, Passenger). The two tiers serve different purposes:
Web tier - Manages HTTP connections, which are potentially persistent and long-lived. It is usually responsible for some of the deployment's configuration (normalizing URLs through rewrites, blocking whole categories of bad requests, etc.), sometimes for HTTPS termination (especially in environments without a load balancer), and sometimes for serving static assets, a task at which web servers excel. Most web servers can handle thousands of concurrent requests with minimal resources per request, so if a request can be satisfied without touching the app tier, it's strongly preferable to let the web server handle it.
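To make the web tier's role concrete, here's a minimal nginx sketch of the usual pattern: serve static files straight from disk and proxy everything else to the app tier over a Unix socket. The socket path, document root, and upstream name are placeholders, not anything specific to your setup:

```nginx
# Hypothetical upstream: a Unicorn master listening on a Unix socket.
upstream app_server {
  server unix:/tmp/unicorn.sock fail_timeout=0;
}

server {
  listen 80;
  root /var/www/myapp/public;   # placeholder document root

  location / {
    # Serve the file directly from disk if it exists; otherwise
    # hand the request off to the app tier.
    try_files $uri @app;
  }

  location @app {
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://app_server;
  }
}
```

Requests for static files never consume an app-tier worker, which is exactly the "handle it in the web tier if you can" principle above.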
Application tier - Manages requests to the application itself, which usually involve some application logic and access to the data storage tier. Requests are generally expected to be short-lived: a few seconds at most, and ideally a few tens of milliseconds (Rails Live Streaming excepted). Concurrency is far more limited here - most app servers can handle only a small number of concurrent requests (one per process for Thin/Unicorn).
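The process-per-request constraint is visible right in a typical Unicorn config. This is an illustrative sketch - the worker count, timeout, and socket path are values you'd tune for your own app, not recommendations:

```ruby
# config/unicorn.rb (illustrative values only)
worker_processes 4                      # at most 4 requests in flight at once
listen '/tmp/unicorn.sock', backlog: 64 # the socket the web tier proxies to
timeout 30                              # reap workers stuck longer than 30s
preload_app true                        # load the app once, fork workers from it
```

With 4 workers, a 5th concurrent request queues at the socket, which is why slow requests are so damaging at this tier.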
Note that this architecture is common in other language ecosystems as well - PHP, Java - since the same division of responsibilities largely holds in systems built on those platforms.
It is possible to run with a unified web and application tier, but that generally requires a runtime that decouples requests from threads or processes, meaning you don't need a dedicated thread or process for each concurrent request. That model adds some complexity on the development side (see Node.js) but can have significant scalability benefits.
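For contrast, here's what that decoupled model looks like in Ruby itself, sketched with EventMachine (the library Thin is built on). This ignores real HTTP parsing entirely; it's only meant to show the shape of the concurrency model, where one process juggles many open connections via event-loop callbacks:

```ruby
require 'eventmachine' # gem install eventmachine

# Each connection is handled by callbacks, not by a dedicated thread
# or process, so a single event loop can hold thousands of sockets
# open concurrently.
module MinimalResponder
  def receive_data(_data)
    body = 'ok'
    send_data "HTTP/1.1 200 OK\r\nContent-Length: #{body.bytesize}\r\n\r\n#{body}"
    close_connection_after_writing
  end
end

EventMachine.run do
  EventMachine.start_server('0.0.0.0', 8080, MinimalResponder)
end
```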