
I'd like to be able to do simple and consistent logging in my code using the Python logging facility.

What I am already able to do:

  1. All existing and future modules and functions get "Entering ..." and "Done ..." log messages.
  2. I don't have to repeat the same logging-setup snippet (shown below, marked "don't want to add everywhere") in every function definition.
  3. The log.info(...) calls work in any function I define in the project hierarchy.

What doesn't work / I don't know how to do it:

  1. I'd like to avoid defining the same @log decorator in every existing/new module that I write.
# don't want to add everywhere
import logging

FORMAT = '%(asctime)s - %(name)-20s - %(levelname)-5s - %(message)s'
LEVEL = logging.DEBUG
logging.basicConfig(format=FORMAT, level=LEVEL)
log = logging.getLogger(__name__)
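One way to remove the repetition entirely (echoing a suggestion in the comments below: write the function in another file and import it) is to keep both the configuration and the decorator in a single shared helper module. The module name `logutil` is my own placeholder, not from the question; a minimal sketch:

```python
# logutil.py -- hypothetical shared helper module (name is a placeholder)
import logging
from functools import wraps

FORMAT = '%(asctime)s - %(name)-20s - %(levelname)-5s - %(message)s'
logging.basicConfig(format=FORMAT, level=logging.DEBUG)

def logger(fn):
    """Wrap fn so its entry and exit are logged under fn's module name."""
    log = logging.getLogger(fn.__module__)

    @wraps(fn)
    def wrapper(*args, **kwargs):
        log.info('About to run %s', fn.__name__)
        out = fn(*args, **kwargs)
        log.info('Done running %s', fn.__name__)
        return out
    return wrapper
```

Every other module then only needs `from logutil import logger` (plus `log = logging.getLogger(__name__)` if it wants module-level log calls), instead of redefining the decorator.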

**Sample code from my Flask project:**
# app/__init__.py
from a import b  # various other imports required for app
import logging
FORMAT = '%(asctime)s - %(name)-20s - %(levelname)-5s - %(message)s'
LEVEL = logging.DEBUG
logging.basicConfig(format=FORMAT, level=LEVEL)
log = logging.getLogger(__name__)
# ... various other app init code
from app import views, models
# app/views.py
from c import d  # various other imports required for the module
import logging

def logger(fn):
    from functools import wraps
    import inspect
    @wraps(fn)
    def wrapper(*args, **kwargs):
        global log
        log = logging.getLogger(inspect.stack()[1][3])
        log.info('About to run %s' % fn.__name__)

        out = fn(*args, **kwargs)

        log.info('Done running %s' % fn.__name__)
        # Return the return value
        return out
    return wrapper

Ideally, I should be able to do something like this:

@app.route('/this_func')
@logger
def this_func():
    log.info('I am doing logging without having to do bunch of definitions.')
    # some more code

@app.route('/that_func')
@logger
def that_func():
    log.info('Yet more logging without having to do bunch of definitions.')
    log.info('I can simply refer to the log object and be done with it.')
    # some more code
Matěj Šmíd
HanSooloo
    "Ideally, I should be able to do something like this"... what about "this" isn't working? What's your question? – darthbith May 02 '14 at 19:24
  • Right, that log_calls decorator looks like it's doing the job already. – dano May 02 '14 at 19:25
    You're calling `basicConfig()` lots of times. Don't do that. It should only be called once from `if __name__ == '__main__'`. – Vinay Sajip May 02 '14 at 19:39
  • @VinaySajip can you elaborate? Where is basicConfig()? Also, I thought __main__ is only called when module is executed directly, e.g., python view.py not when `from app import view` – HanSooloo May 02 '14 at 19:52
  • Tried to clarify the question as only one challenge remains: how to avoid defining the `@logger` decorator in every module. – HanSooloo May 04 '14 at 04:25
    @HanSooloo Just write the function in another file and import it. – muehsi Nov 20 '17 at 13:51

2 Answers

The final setup that worked for me was the following:

# At the beginning of every .py file in the project
import logging
from functools import wraps

def logger(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        log = logging.getLogger(fn.__name__)
        log.info('About to run %s' % fn.__name__)

        out = fn(*args, **kwargs)

        log.info('Done running %s' % fn.__name__)
        # Return the return value
        return out
    return wrapper

# Do the following section only in application's app/__init__.py
# Other files will pick it up from here.
FORMAT = '%(asctime)s - %(name)s - %(levelname)s - %(funcName)s - %(message)s'

# Change logging LEVEL according to debugging needs.
# Probably better to read this from a config or a launch parameter.
LEVEL = logging.DEBUG

logging.basicConfig(format=FORMAT, level=LEVEL)
# Up to here only for app/__init__.py

# This section again at the beginning of every .py file
log = logging.getLogger(__name__)
log.info('Entered module: %s' % __name__)

Now, add a @logger decorator to every function definition in the module:

@logger
def do_something():
    print('doing really useful stuff.')
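One caveat worth noting (my observation, not part of the original answer): the `%(funcName)s` field in the format above reports the function that issued the logging call, which inside the decorator is always `wrapper`; the decorated function's name survives only in the logger name, because the decorator passes `fn.__name__` to `getLogger`. A small check:

```python
import logging
from functools import wraps

def logger(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        log = logging.getLogger(fn.__name__)
        log.info('About to run %s', fn.__name__)
        out = fn(*args, **kwargs)
        log.info('Done running %s', fn.__name__)
        return out
    return wrapper

# Collect LogRecords in a list instead of formatting them, so the
# record fields can be inspected directly.
records = []
handler = logging.Handler()
handler.emit = records.append
logging.getLogger().addHandler(handler)
logging.getLogger().setLevel(logging.INFO)

@logger
def do_something():
    return 42

do_something()
print(records[0].name)      # 'do_something' -- the logger name
print(records[0].funcName)  # 'wrapper' -- not 'do_something'
```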
Matěj Šmíd
HanSooloo
  • where does the apply func come in? – Jeff Tilton Oct 29 '19 at 15:09
  • @JeffTilton I've not had the chance to try this myself but the apply function would most likely refer to the function that is being wrapped by the `@logger` decorator. In this example, that would be `do_something`. – Karthic Raghupathi Jun 04 '20 at 17:28
  • @JeffTilton `apply` was used in Python 2, now you can use `fn(*args, **kwargs)` instead. – Matěj Šmíd Mar 13 '21 at 04:25
  • I have been using essentially this for several years. It works fine in operation, but during debugging it's a minor PITA when stepping into an @logger decorated function. Worth it, but I'd really like an EVEN better solution. – griswolf Apr 19 '21 at 18:21

Using a contextlib context manager, with exception handling:

from contextlib import contextmanager
import logging
from time import time
logging.basicConfig(level=logging.INFO, format='%(asctime)s | %(levelname)s | %(processName)s | %(thread)d | %(message)s')
log = logging.getLogger(__name__)

@contextmanager
def EEL(methodDescr, raiseExcp=False):
  '''EEL = Entry Exit Log'''
  log.info(f"Starting function: {methodDescr}")
  start = time()
  try:
    yield
  except Exception:
    log.exception("Got exception")
    if raiseExcp:
      raise  # re-raise to the caller when requested
  finally:
    end = time()
    log.info(f"Time taken by {methodDescr}: {end-start}")

Usage:

@EEL("Test method")
def test():
  x = 0
  for i in range(1,1000000):
    x = x + i

Output:

2020-09-22 11:09:17,418 | INFO | MainProcess | 139709531875200 | Starting function: Test method
2020-09-22 11:09:17,484 | INFO | MainProcess | 139709531875200 | Time taken by Test method: 0.06573295593261719
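Since `@contextmanager` objects double as decorators (they inherit from `ContextDecorator`), the same helper also works the other way around: as a `with` block timing an arbitrary section of code rather than a whole function. A sketch, reproducing the helper so the snippet runs standalone (and with the `raiseExcp` flag actually honored):

```python
from contextlib import contextmanager
import logging
from time import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

@contextmanager
def EEL(methodDescr, raiseExcp=False):
    '''EEL = Entry Exit Log; usable as a decorator or as a with-block.'''
    log.info(f"Starting: {methodDescr}")
    start = time()
    try:
        yield
    except Exception:
        log.exception("Got exception")
        if raiseExcp:
            raise  # propagate to the caller when requested
    finally:
        log.info(f"Time taken by {methodDescr}: {time() - start}")

# Wrap an arbitrary block of code, not a whole function:
with EEL("inline block"):
    total = sum(range(1_000_000))

# With raiseExcp left False, exceptions are logged and swallowed:
with EEL("failing block"):
    raise ValueError("demo")
```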
S2L
  • Whenever you're timing something in python you should use `time.perf_counter()` instead of `time()`. – AlexLordThorsen Dec 18 '20 at 23:02
    From this link (https://stackoverflow.com/a/25787875/2612429), it appears that perf_counter returns unit-less measurement, that cannot be related to real-world time and may vary across different systems. If that's true, your suggestion might be fine if one is comparing performance on a single machine only. – S2L Dec 25 '20 at 22:51
  • Solid point. It still works on the distributed systems I work with. You need your logging statements to contain something like a process id, thread id, or a request id. Something that lets you establish a timeline being read over a log file or log stream. Having the UTC time can be useful but when I'm doing performance comparisons I find it more useful to have the time between calls be in milliseconds. – AlexLordThorsen Dec 28 '20 at 00:13