6

A Python application we're developing requires a logger. A coworker argues that the logger should be created and configured in every class that uses it. My opinion is that it should be created and configured at application start and passed in as a constructor parameter.

Both variants have their merits and we're unsure what the best practice is.

Asclepius
Anonymous Coward
  • You only need to configure the logger once; in other classes, just import the logging module. It's a good idea to create a 'configure_logging' function (in a separate module or in the main module) so you can import this function to test a class or module (import the start_logging after `if __name__ == "__main__"` or in a `unittest`). – joente Aug 05 '13 at 07:53
  • That makes sense but for the unit tests I'd like to mock the logger and I don't see how this would work with a logging module. – Anonymous Coward Aug 05 '13 at 08:11
  • To give a better idea of how I mean to implement a logger, I posted an answer with an example; hope that helps. Of course it's just an idea, and I hope you find the best solution for your case. – joente Aug 05 '13 at 11:55

3 Answers

4

Not usually; it is typically not meant to be passed as a parameter.

The convention is to put `log = logging.getLogger(__name__)` at the top of each module. The value of `__name__` is different for each module, so each log record reflects the module it came from.

Asclepius
0

I don't think passing the logger as a parameter is a good idea. You should instead put a global logger in its own module. For example:
logger.py

import logging
log = logging.getLogger('')

classFoo.py

from logger import log
log.debug('debug message')

classBar.py

from logger import log
log.warn('warn!')
adamr
    This isn't a great idea either, because you are now always using the logger with the empty-string name (the root logger) instead of a logger named `__name__`. Please see my answer. – Asclepius Feb 17 '17 at 15:10
0

Maybe this helps you get an idea? Of course you can make it much better, reading settings from a config file or whatever, but this is a quick example.

A separate module to configure the logging: mylogmod.py :

import logging

FILENAME = "mylog.log" # Your logfile
LOGFORMAT = "%(message)s" # Your format
DEFAULT_LEVEL = "info" # Your default level, usually set to warning or error for production
LEVELS = {
    'debug': logging.DEBUG,
    'info': logging.INFO,
    'warning': logging.WARNING,
    'error': logging.ERROR,
    'critical': logging.CRITICAL}

def startlogging(filename=FILENAME, level=DEFAULT_LEVEL):
    logging.basicConfig(filename=filename, level=LEVELS[level], format=LOGFORMAT)

The main.py :

import logging
from mylogmod import startlogging
from myclass import MyClass

startlogging()

logging.info("Program started...")
mc = MyClass()

A class in myclass.py, from a module with a self test. You can do something similar in a unittest. (Note that you don't need to import the logging module in a unittest; importing the startlogging function is enough. This way you can set the default level to warning or error, and the unittests and self tests to debug.)

import logging

class MyClass(object):
    def __init__(self):
        logging.info("Initialize MyClass instance...")

if __name__ == "__main__":
    from mylogmod import startlogging
    startlogging(level="debug")
    logging.debug("Test MyClass...")
    #... rest of test code...
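Regarding the mocking concern raised in the comments: because the class calls the module-level `logging` functions, a test can patch them with `unittest.mock` instead of configuring a real handler. A minimal sketch (the class is inlined here so the example is self-contained):

```python
import logging
from unittest import mock

class MyClass(object):
    def __init__(self):
        logging.info("Initialize MyClass instance...")

# Patch logging.info so the test asserts on the call itself rather
# than capturing real log output.
with mock.patch("logging.info") as mock_info:
    MyClass()

mock_info.assert_called_once_with("Initialize MyClass instance...")
```

The same `mock.patch` approach works inside a `unittest.TestCase` method, which avoids touching the logging configuration at all.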
joente
  • 858
  • 7
  • 9
  • This is using the root logger. Any sophisticated application that integrates with other applications wouldn't want to use the root logger. – Asclepius Feb 17 '17 at 15:17