
I'm somewhat new to Python, especially to organizing code into modules and functions across several files rather than writing raw single-file scripts.

I'm writing a command line app and I would like to have a single function (I call it argpconf) that will parse the command line arguments and set a log level accordingly. Most importantly, I'd like the log level to be set once in this function and to be the same across all modules with minimum overhead when creating their loggers. Additionally, I would like to be able to identify the module from which the message came while using the common formatter:

logging.Formatter("%(levelname)s : %(name)s : %(message)s")

Partially based on a cookiecutter template, I've created the following files:

├── api
│   ├── __init__.py
│   └── some_functionality.py
├── cli.py
├── core
│   ├── argpconf.py
│   ├── __init__.py
│   ├── logger.py
│   └── __version__.py
├── __init__.py
└── __main__.py

core/logger.py has the following content:

from logging import Formatter, Logger as _Logger, NullHandler, StreamHandler

class Logger(_Logger):
    def __init__(self, name=None):
        super(Logger, self).__init__(name or __name__.split(".")[0])
        self.addHandler(NullHandler())  # default to no output

    def start(self, level="WARN", stream=None,
              fmt="%(levelname)s : %(name)s : %(message)s"):
        handler = StreamHandler(stream)
        handler.setFormatter(Formatter(fmt))
        self.addHandler(handler)
        self.setLevel(level.upper())

    def stop(self):
        # Remove everything but the NullHandler.
        for handler in self.handlers[1:]:
            self.removeHandler(handler)

logger = Logger()

Compared to the ideas suggested in the answers to these questions:

I really like the approach taken in the cookiecutter template with the logger, since it allows you to just import logger and get a logger object whose log level is the same across all modules. However, I'm not totally satisfied with it, because in my case argpconf.py is the first module that starts the logger, so all log messages from all modules have their %(name)s substituted with core, since that is argpconf.py's __name__.split(".")[0].

How can I improve the logger module so that it detects the module that called it and prints log messages with that module's name as the %(name)s, and perhaps even the function that emitted them?

Vinay Sajip
Doron Behar

1 Answer


This approach seems to be making things more complicated than they need to be. I realise this is coming from the cookiecutter template you used, and it's just my opinion, but the approach to logging taken in that template isn't what I would regard as best practice. You know your use case best, but if all I wanted was to

have a single function that will parse the command line arguments and set a log level accordingly. Most importantly, I'd like the log level to be set once in this function and to be the same across all modules with minimum overhead when creating their loggers. Additionally, I would like to be able to identify the module from which the message came while using the common formatter

then the simplest approach is to import `argparse` and `logging` in your main script, process command line arguments and set the logging level accordingly, call `basicConfig()` (as suggested in Brian M. Sheldon's comment) and then dispatch to your application endpoint as determined by command line arguments. Every module that needs to log something just needs to import `logging`, do `logger = logging.getLogger(__name__)`, and then call `logger.debug(...)` or whatever, wherever needed in that module. If you stick to this, all modules will use the logging level set in `basicConfig()` automatically, and if you use the fragment `%(name)s` in the `format=` argument to `basicConfig()`, you will see the full module name in that place in the logged messages, as a fully-qualified dotted name (like `api.some_functionality`). This approach will certainly have less overhead in creating loggers than the cookiecutter template does.

Update: I'll update the Python Logging Cookbook with an example. For now, here's a Gist with just the code.

Vinay Sajip
  • Sounds good, but I'm not sure I totally understand. Could you please add some example code? – Doron Behar Oct 16 '18 at 06:40
  • Thanks for the examples. I think you're using `importlib` to get around this, but isn't it best practice to import modules at the top of files? So in your `app.py`, if you import, say, `start` at the top, then that module's logger will be called first, and you'll end up without DEBUG-level logging from that module. This was a sticking point for my understanding - I wanted to stick to "best practice" with my imports, so it might be helpful if you made it clear why you instead use `importlib` (if that is actually the reason). Thanks again! – Nick P Feb 24 '23 at 20:42
  • `importlib` is only used because I chose to make the module names dynamically determined at runtime - see the comment at line 26 in the Gist. From the logging POV, it doesn't matter what order you import modules in as long as they only do `logger = logging.getLogger(__name__)` at module level - the loggers' level isn't set except by the `basicConfig()` call. Imports ought to be side-effect-free as far as possible, so you shouldn't log in module level code on import, only in functions in the module, as far as possible. – Vinay Sajip Feb 27 '23 at 08:11