
So currently I manage two CherryPy web apps, each written by a different previous developer.

One is written in Python 2.7 and the other in Python 3.0. Both currently use the built-in CherryPy web server and each starts its own instance, as they are independent apps with different purposes and different databases.

I have been tasked with developing a third, and possibly more, applications that will run on the same server.

It has been deemed inefficient to instantiate a new web server on a different port for every app we develop, so we are looking for a solution to this.

The first thing I found was mod_wsgi for Apache, but I quickly found out that it can only handle one Python version at a time.

So as of right now, what solution exists for such a setup?

These are all internal apps with low traffic, but we do not want 6 different servers running, each on its own port.

ashrles

2 Answers


You can use nginx (probably with gunicorn). I'm fairly sure the same thing can be done with Apache, but I have no experience with setting up WSGI on Apache.

The main point is that you'll still need to run a few different internal web servers, one for each app, but you'll be able to unite them on the same port with nginx.

I'm not sure if CherryPy supports listening on a Unix socket instead of a port; if it does, you may omit gunicorn and configure nginx to talk to the CherryPy servers directly.
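
If it does, binding the built-in server to a socket should be a one-line config change, roughly like this (untested sketch, assuming a server.socket_file option exists in your CherryPy version; the path matches the app1 upstream in the nginx config below):

import cherrypy

# sketch: bind CherryPy's built-in server to a Unix socket instead of a TCP port
cherrypy.config.update({
    'server.socket_file': '/var/run/user/app1.sock',  # assumed path, same as in nginx.conf
})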

Here's an example config for nginx. You can create different subdomains or sub-URLs in nginx.conf, e.g.:

upstream app1 {
    server unix:/var/run/user/app1.sock fail_timeout=0;
}
upstream app2 {
    server unix:/var/run/user/app2.sock fail_timeout=0;
}

...

server {
    listen 0.0.0.0:80;
    server_name app1.yourdomain.com;
    location / {
        proxy_pass http://app1;
    }
}
server {
    listen 0.0.0.0:80;
    server_name app2.yourdomain.com;
    location / {
        proxy_pass http://app2;
    }
}

...and in your gunicorn config for each app (by the way, you'll have two gunicorns: gunicorn-2.7 and gunicorn-3.2) you'll have to bind the application not to a port, but to a Unix socket, e.g. /var/run/user/app1.sock.
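
That gunicorn config for app1 could look roughly like this (just a sketch: gunicorn config files are plain Python, and the file name and worker count here are assumptions):

# gunicorn_app1.py -- hypothetical gunicorn config for app1
bind = 'unix:/var/run/user/app1.sock'  # must match the app1 upstream in nginx.conf
workers = 2                            # plenty for a low-traffic internal app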

This will result in having two different applications for two subdomains. These applications can even be written in different languages.

If you navigate to http://*****.yourdomain.com, your request will be sent to the corresponding gunicorn instance, and the appropriate application will handle it.

I think you'll want gunicorn in any case: it lets you bind an application to a Unix socket and provides many other useful features, e.g. running multiple workers of the same app.
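
Since gunicorn serves WSGI applications, each CherryPy app also needs to be exposed as a WSGI callable. A minimal sketch, assuming the first app's root class lives in a module called app1:

# wsgi_app1.py -- hypothetical module exposing the CherryPy app to gunicorn;
# start it with something like: gunicorn-2.7 -c gunicorn_app1.py wsgi_app1:application
import cherrypy

from app1 import App  # assumption: your CherryPy root class lives in app1.py

# cherrypy.tree.mount returns a WSGI application object that gunicorn can serve
application = cherrypy.tree.mount(App(), '/')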

Andrew Dunai

In fact, what you are trying to avoid looks very much like a microservice architecture (see Martin Fowler's article). Basically, it proposes minimal, self-contained services which are multiplexed to act as a whole, instead of one monolithic application. It has its pros and cons, but today it is generally considered a good approach, at least at large scale.

Thus one way to design your application is as a microservice architecture, and running several internal servers isn't a problem in itself. Just note that some complexity in this approach is shifted to the infrastructure; that is to say, you need solid deployment, monitoring, etc.

The idea of Andrew's answer is correct. But CherryPy specifically is a full-featured HTTP server, so you generally don't need another intermediate point of failure like gunicorn, and you can avoid WSGI altogether. Just use HTTP, i.e. let nginx act as a reverse HTTP proxy in front of the internal CherryPy HTTP servers.

In its simplest form it looks like the following.

Python 2 app

#!/usr/bin/env python
# -*- coding: utf-8 -*-


import cherrypy



config = {
  'global' : {
    'server.socket_host' : '127.0.0.1',
    'server.socket_port' : 8080,
    'server.thread_pool' : 8
  },
  '/' : {
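    # the proxy tool rebuilds request.base from the X-Forwarded-* headers nginx sets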
    'tools.proxy.on' : True 
  }  
}


class App:

  @cherrypy.expose
  def index(self):
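    # {}.keys() is a plain list under Python 2, so this responds with 'list'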
    return type({}.keys()).__name__


if __name__ == '__main__':
  cherrypy.quickstart(App(), '/app1', config)

Python 3 app

#!/usr/bin/env python3


import cherrypy


config = {
  'global' : {
    'server.socket_host' : '127.0.0.1',
    'server.socket_port' : 8081,
    'server.thread_pool' : 8
  },
  '/' : {
    'tools.proxy.on' : True 
  }
}


class App:

  @cherrypy.expose
  def index(self):
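    # {}.keys() is a dict view under Python 3, so this responds with 'dict_keys'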
    return type({}.keys()).__name__


if __name__ == '__main__':
  cherrypy.quickstart(App(), '/app2', config)

nginx config

server {
  listen  80;

  server_name ngx-test;

  root /var/www/ngx-test/www;

  location /app1 {
    proxy_pass         http://127.0.0.1:8080;
    proxy_set_header   Host             $host;
    proxy_set_header   X-Real-IP        $remote_addr;
    proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
  }

  location /app2 {
    proxy_pass         http://127.0.0.1:8081;
    proxy_set_header   Host             $host;
    proxy_set_header   X-Real-IP        $remote_addr;
    proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
  }

}

For a full-blown CherryPy deployment, take a look at this answer.

saaj
  • I like how this minimizes overhead and points of failure. Does this reduce the amount of resources consumed by the microservice architecture though? Because it looks like gunicorn would bring all the apps under the same umbrella, does this do something similar? It looks like I still need to start the app as its own process, wouldn't quickstart instantiate a "new" server? – ashrles Dec 09 '14 at 17:45
  • First, my answer suggests using *CherryPy <= HTTP => Nginx* instead of *CherryPy <= WSGI => gunicorn <= HTTP => Nginx*, and gives you a reason to justify the multi-app architecture. Second, fewer intermediaries means fewer resources consumed. Third, yes, you still need to start each app as a separate process, plus nginx to multiplex them. The examples above are a demo only, as `quickstart()` is not the way to deploy CherryPy. For real-world deployment follow the link. – saaj Dec 09 '14 at 19:39