
I am running my Django app under a uWSGI server, starting 32 processes. The args in my init script are:

ARGS="--pidfile ${PIDFILE} --uid ${UID} -s /tmp/${NAME}.sock --pythonpath ${GCS_HOME}/server/src/gcs --master -w wsgi -d ${GCS_HOME}/logs/uwsgi.log -p 32 -z 30"

Versions are Python 2.6.5, Django 1.2.1, uWSGI 0.9.5.1.

I want to have a single log file, so I am using a multiprocessing-based log handler as described in question 641420.

The multiprocessing handler works fine in a simple test app that I have, and also when I run `manage.py runserver_plus` with Werkzeug, but nothing is logged when I run under Django and uWSGI (though I get no errors or exceptions from the uWSGI process either).

My wsgi file is below; if anyone can identify a problem with my config, or an explanation for what is happening, I'd be grateful:

APP_VIRTUAL_ENV = "/home/devadmin/gcs/server/gcs_env/"
APP_PARENT_PATH = "/home/devadmin/gcs/server/src/"

##                                                                              

import sys
# Redirect stdout to comply with WSGI                                           
sys.stdout = sys.stderr

import os, site

# Set the settings module django should use                                     
os.environ['DJANGO_SETTINGS_MODULE'] = "gcs.settings"

# set the sys.path                                                              
site_packages_subpath = "/lib/python%s.%s/site-packages" % (sys.version_info[0], sys.version_info[1])
site_packages_path = os.path.join(APP_VIRTUAL_ENV, site_packages_subpath[1:])

sys_path = []
for path in sys.path:
    if site_packages_subpath in path and not path.startswith(APP_VIRTUAL_ENV):
        continue
    sys_path.append(path)

sys.path = [ APP_PARENT_PATH ]
sys.path += sys_path
site.addsitedir(site_packages_path)

# reorder sys.path                                                              
for path in sys_path:
    sys.path.remove(path)
sys.path += sys_path

# setup logging                                                                 
import os.path
import logging
import logging.config
logging.config.fileConfig(os.path.join(os.path.dirname(__file__), "logging.conf"))
  • Hard to tell, what does your config file look like? What version of Python are you running? You're importing but not using `multiproc_handler`, and you're not using `log_conf_file` that you've computed in the actual `fileConfig` call, for some reason. – Vinay Sajip Nov 26 '10 at 11:40
  • Added versions above and removed spurious lines from wsgi.py (they were left over from some debugging I was doing). Also noted that when I use werkzeug/runserver_plus, logging is ok. So it would indicate that somehow my logging is not correctly initialised via wsgi.py. When I use a standard python logging handler (RotatingFileHandler) I get log output, but this is not a solution for multiple uwsgi processes. – rtmie Nov 26 '10 at 13:42
  • I think this is because of permissions on the log folder. Maybe you run the debug server as one user and production as another? Maybe you even know this, but it must be permissions. Try setting rwx on the log folder and its parent for that user, or as a debugging step set rwx for all. – SanityIO May 06 '11 at 09:28

1 Answer


ANSWER HAS BEEN UPDATED - May 15, 2013 - see bottom for an additional logging option

If you want a single log file, use syslog and let it handle multiplexing all the inputs into one file. Having multiple processes append to a single file is ugly, even with multiprocessing's workarounds.

Aside from the thread- and process-safe 'downmixing' of your various streams of logging information, you can also specify a remote host to send the logs to if you wish. It makes log-file rotation a breeze, too: your clients write to either a domain socket or a UDP socket, so they don't have to wait while you manage the files underneath them. And better yet, you won't lose messages.
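For the remote-host case, `SysLogHandler` also accepts a `(host, port)` tuple and sends records as UDP datagrams; a short sketch (the logger name is illustrative, and `'localhost'` stands in for your actual log host):

```python
import logging
from logging.handlers import SysLogHandler

# A (host, port) address makes SysLogHandler send UDP datagrams instead of
# writing to the local /dev/log socket. 514 is the standard syslog port;
# replace 'localhost' with your syslog server's hostname.
logger = logging.getLogger("mything.remote")
handler = SysLogHandler(address=('localhost', 514),
                        facility=SysLogHandler.LOG_USER)
handler.setFormatter(logging.Formatter('%(name)s: %(levelname)s %(message)s'))
logger.addHandler(handler)
logger.setLevel(logging.INFO)
```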

Used in combination with a syslog daemon like syslog-ng, you can do lots of fancy slicing and dicing, message relaying, duplicate message filtering, etc.

Long story short: syslog is better than managing your own log file (in my opinion). The best argument against syslog is that you don't 'own' the server (and, ostensibly, the log files may be off-limits to you).

If you want to be super awesome, send your log data to Splunk and you'll take your game to the next level. Most folks use Splunk for IT log aggregation, but syslogging from your application into Splunk is a shortcut to awesome data-mining capabilities for understanding performance bottlenecks, usage patterns and much more.

#!/usr/bin/python

import logging
from logging.handlers import SysLogHandler

# Setup: send records to the local syslog daemon via the /dev/log socket
logger = logging.getLogger("mything")
hdlr = SysLogHandler(address='/dev/log', facility=SysLogHandler.LOG_USER)
formatter = logging.Formatter('%(name)s: %(levelname)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logger.setLevel(logging.INFO)

logger.info('hello Laverne!')
# Note: this one won't be emitted -- the logger level is INFO, above DEBUG
logger.debug('The Great Ragu has taken ill!')

NEW CONTENT - May 15, 2013

There is an additional option worth mentioning if you have the infrastructure / tenacity to set it up: Sentry, which has client libraries available for Python (as well as JavaScript and others) and provides a centralized location to send your errors to for monitoring. It looks neat.

    For those interested in adding the SysLog handler through the Django settings.py configuration, please see @raacer's answer on the related question: [link](http://stackoverflow.com/a/12411547/255117) – Johnny May 03 '17 at 18:13