146

As the AWS documentation suggests:

import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
def my_logging_handler(event, context):
    logger.info('got event{}'.format(event))
    logger.error('something went wrong')

Then I wrote:

import logging
logging.basicConfig(level = logging.INFO)
logging.info("Hello World!")

The first snippet prints to the CloudWatch console, but the second one does not.

I don't see any difference, since both snippets use the root logger.

p.magalhaes
  • 1
    You are missing "return 'Hello World!'" – Piyush Patil Jun 08 '16 at 13:43
  • 1
    Why not do the same as in the first code snippet? Get the logger that's already instantiated and then use said logger. – HEADLESS_0NE Jun 08 '16 at 13:46
  • 11
    @HEADLESS_0NE: I can use the first one. But I would like to understand why it behaves this way. – p.magalhaes Jun 08 '16 at 13:58
  • 1
    Also check out python-cloud-logger at https://pypi.org/project/python-cloud-logger/. It provides context logging, where the requestId and other context variables can be saved to the thread's context; every log from then on carries that context. – vasu May 14 '19 at 18:14

11 Answers

160

The reason that logging does not seem to work is that the AWS Lambda Python runtime pre-configures a logging handler on the root logger; depending on the runtime version, that handler may change the format of the logged message and add some metadata to the record if available. What is not pre-configured, though, is the log level: the root logger is left at the default level (WARNING), so INFO and DEBUG messages are simply not printed.

As AWS itself documents, to use the logging library correctly in the AWS Lambda context, you only need to set the log level of the root logger:

import logging
logging.getLogger().setLevel(logging.INFO)

If you want your Python script to run both on AWS Lambda and under your local Python interpreter, you can check whether a handler is already configured and fall back to basicConfig (which creates the default stderr handler) otherwise:

if len(logging.getLogger().handlers) > 0:
    # The Lambda environment pre-configures a handler logging to stderr. If a handler is already configured,
    # `.basicConfig` does not execute. Thus we set the level directly.
    logging.getLogger().setLevel(logging.INFO)
else:
    logging.basicConfig(level=logging.INFO)
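
For completeness, a quick usage sketch that combines the check above with an actual log call inside a handler (function and message names are placeholders):

import logging

if len(logging.getLogger().handlers) > 0:
    logging.getLogger().setLevel(logging.INFO)   # running on Lambda
else:
    logging.basicConfig(level=logging.INFO)      # running locally

logger = logging.getLogger(__name__)

def lambda_handler(event, context):
    logger.info("got event: %s", event)  # visible both locally and in CloudWatch
    return "ok"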
Pit
  • 26
    Instead of `len(logging.getLogger().handlers) > 0`, using `logging.getLogger().hasHandlers()` may be better. – ebk Apr 17 '20 at 08:57
  • 5
    @ebk that is a very good point, but `hasHandlers` has only been available since Python 3.2. Since [AWS still supports the Python 2.7 runtime](https://aws.amazon.com/blogs/compute/continued-support-for-python-2-7-on-aws-lambda/), using `len(...handlers)` is the most portable solution at the moment. – Pit Apr 18 '20 at 09:18
  • 9
    You're right. I should've added "for Python 3.2+" :) – ebk Apr 18 '20 at 11:18
  • 3
    what about `if logging.getLogger().handlers:` for python < 3.2? reads similar to the 3.2+ version and is more idiomatic. – aydow Feb 13 '22 at 22:29
  • @aydow With you - but slightly obfuscating if the reader does not know what type is expected in advance. – jtlz2 Dec 15 '22 at 15:14
76

Copied straight from the top answer in the question @StevenBohrer's answer links to (this did the trick for me, replacing the last line with my own config):

import logging

root = logging.getLogger()
if root.handlers:
    for handler in root.handlers:
        root.removeHandler(handler)
logging.basicConfig(format='%(asctime)s %(message)s', level=logging.DEBUG)
Brett Beatty
  • 3
    Thanks, this is great. I put this just after my import statements, and then my entire lambda/module had access to my own personal logger :) Just added this line after basicConfig: logger = logging.getLogger() – atlas_scoffed May 05 '18 at 07:09
  • 25
    Since Python 3.8 there is a new parameter `force` that does exactly what you described above: `logging.basicConfig(level=logging.INFO, force=True)`. https://docs.python.org/3/library/logging.html#logging.basicConfig – linqu Jun 25 '21 at 10:11
19

I've struggled with this exact problem. The solution that works both locally and on AWS CloudWatch is to set up your logging like this:

import logging

# Initialize your log configuration using basicConfig
logging.basicConfig(level=logging.INFO)

# Retrieve the logger instance
logger = logging.getLogger()

# Log your output to the retrieved logger instance
logger.info("Python for the win!")

Kinman
17

I had a similar problem, and I suspect that the lambda container is calling logging.basicConfig to add handlers BEFORE the lambda code is imported. This seems like bad form...

The workaround was to check whether root-logger handlers were already configured and, if so, remove them, add my own formatter and desired log level (using basicConfig), and then restore the handlers.

See this article Python logging before you run logging.basicConfig?
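
A minimal sketch of the remove-then-reconfigure part (the format string below is arbitrary):

import logging

root = logging.getLogger()

# Drop the handlers the Lambda runtime pre-installed so that basicConfig is not a no-op
for handler in list(root.handlers):
    root.removeHandler(handler)

# basicConfig now attaches a fresh stderr handler with the desired format and level
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s [%(levelname)s] %(name)s: %(message)s")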

Steven Bohrer
10

It depends on the AWS Lambda Python runtime version.

If the Python version is 3.8 or above:

import os
import logging

default_log_args = {
    "level": logging.DEBUG if os.environ.get("DEBUG", False) else logging.INFO,
    "format": "%(asctime)s [%(levelname)s] %(name)s - %(message)s",
    "datefmt": "%d-%b-%y %H:%M",
    "force": True,
}


logging.basicConfig(**default_log_args)
log = logging.getLogger("Run-Lambda")

log.info("I m here too)

If the Python version is 3.7 or below:

import os
import logging

root = logging.getLogger()
if root.handlers:
    for handler in list(root.handlers):  # iterate over a copy while removing
        root.removeHandler(handler)

default_log_args = {
    "level": logging.DEBUG if os.environ.get("DEBUG", False) else logging.INFO,
    "format": "%(asctime)s [%(levelname)s] %(name)s - %(message)s",
    "datefmt": "%d-%b-%y %H:%M"
}

logging.basicConfig(**default_log_args)
log = logging.getLogger("Run-Lambda")

log.info("Iam here")

Vijay Anand Pandian
  • 1
    Any idea why `force = True` is required? Why is "not logging" the default logging behavior? – weberc2 Oct 20 '22 at 03:50
  • 1
    plus one for the force! – jtlz2 Dec 15 '22 at 15:17
  • 2
  • @weberc2 Logs are expensive in volume and can cause accidental PII leakage; by default, Python doesn't kill off any existing logger configuration (that's the coder's choice). – jtlz2 Dec 15 '22 at 15:19
  • `force=True`: if this keyword argument is specified as true, any existing handlers attached to the root logger are removed and closed before carrying out the configuration as specified by the other arguments. – Vijay Anand Pandian Jan 04 '23 at 09:54
5

You are probably not referencing the same logger, actually. In the first snippet, log the return value of logging.Logger.manager.loggerDict.

It will return a dict of the loggers already initialized.

Also, from the logging documentation, an important note on logging.basicConfig:

Does basic configuration for the logging system by creating a StreamHandler with a default Formatter and adding it to the root logger. The functions debug(), info(), warning(), error() and critical() will call basicConfig() automatically if no handlers are defined for the root logger.

This function does nothing if the root logger already has handlers configured for it.

Source: https://docs.python.org/2/library/logging.html#logging.basicConfig
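
For example, a quick way to inspect this from inside a handler (a rough sketch, names are placeholders):

import logging

def lambda_handler(event, context):
    root = logging.getLogger()
    # WARNING is at the root logger's default level, so these lines print even without configuration
    root.warning("known loggers: %s", list(logging.Logger.manager.loggerDict))
    root.warning("root handlers: %s (level %s)", root.handlers, logging.getLevelName(root.level))
    # If a handler is already attached to the root logger, this call is a no-op
    logging.basicConfig(level=logging.INFO)
    return "done"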

HEADLESS_0NE
  • The snippets are separate. In the second snippet no logging is configured, so basicConfig will configure the root logger; and when I call logging.info it uses the root logger. To me that makes it no different from the first snippet. – p.magalhaes Jun 08 '16 at 15:11
  • @HEADLESS_0NE is right here. It seems that in lambda a logger is already configured. If I do the above but set the level to DEBUG then I see more logs than I'm producing (I'm not producing any of these myself): `[DEBUG] 2016-10-29T09:01:28.376Z 45e6c8bd-9db6-11e6-aa56-43d43acb066b Acquiring 0 [DEBUG] 2016-10-29T09:01:28.389Z 45e6c8bd-9db6-11e6-aa56-43d43acb066b IOWriteTask({'offset': 0}) about to wait for the following futures [] [DEBUG] 2016-10-29T09:01:28.389Z 45e6c8bd-9db6-11e6-aa56-43d43acb066b IOWriteTask({'offset': 0}) done waiting for dependent futures` – Brad M Oct 29 '16 at 09:03
2

I would suggest using AWS Lambda Powertools for Python. The logging doc is here. Code example:

from aws_lambda_powertools import Logger
logger = Logger() # Sets service via env var
# OR logger = Logger(service="example")

It works both locally and on CloudWatch for me.
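
A short usage sketch (assuming the standard Powertools Logger API; the service name is a placeholder):

from aws_lambda_powertools import Logger

logger = Logger(service="example")

@logger.inject_lambda_context  # adds the Lambda context (e.g. the request ID) to each log record
def lambda_handler(event, context):
    logger.info("Hello from Powertools")
    return {"status": "ok"}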

Yuantao
0

Essentially, the AWS logging monkey patch needs to be handled in a very particular way, where:

  1. The log level is set from the TOP level of the script (e.g., at import time)
  2. The log statements you are interested in are invoked from within the lambda function

Since it's generally considered good form not to run arbitrary code at Python module-import time, you should usually be able to restructure your code so that the heavy lifting occurs only inside the lambda function.
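
A minimal sketch of that structure (handler and message names are placeholders):

import logging

# 1. Set the level at module-import time, on the logger the runtime has already configured
logging.getLogger().setLevel(logging.INFO)
logger = logging.getLogger(__name__)

def lambda_handler(event, context):
    # 2. Do the heavy lifting, and the logging, inside the handler
    logger.info("received event: %s", event)
    return {"status": "ok"}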

Edward Z. Yang
0

I have also solved this issue so that the logging setup does not need to change between running locally and on AWS. Below is the sample code:

import os
import logging

DEFAULT_LOG_LEVEL = logging.INFO  # assumed default for local runs (not defined in the original)

def set_default_logger():
    if "LOG_LEVEL" in os.environ:
        # For Lambda
        log_level = os.environ["LOG_LEVEL"]
    else:
        log_level = DEFAULT_LOG_LEVEL  # Set default log level for local

    root = logging.getLogger()
    if root.handlers:
        # For Lambda: remove the pre-installed handlers so basicConfig takes effect
        for handler in list(root.handlers):
            root.removeHandler(handler)
        logging.basicConfig(level=log_level,
                            format='[%(asctime)s.%(msecs)03d] [%(levelname)s] [%(module)s] [%(funcName)s] [L%(lineno)d] [P%(process)d] [T%(thread)d] %(message)s',
                            datefmt='%Y-%m-%d %H:%M:%S')
    else:
        # For local: log to a file in the current directory
        l_name = os.getcwd() + '/' + 'count_mac_module.log'
        logging.basicConfig(filename=l_name, level=log_level,
                            format='[%(asctime)s.%(msecs)03d] [%(levelname)s] [%(module)s] [%(funcName)s] [L%(lineno)d] [P%(process)d] [T%(thread)d] %(message)s',
                            datefmt='%Y-%m-%d %H:%M:%S')

    logger = logging.getLogger(__name__)
    logger.debug(f"************* logging set for Lambda {os.getenv('AWS_LAMBDA_FUNCTION_NAME')} *************")
Ashish Sharma
-1
    import logging

    LOGGER = logging.getLogger()
    HANDLER = LOGGER.handlers[0]  # the handler pre-installed by the Lambda runtime
    HANDLER.setFormatter(
        logging.Formatter("[%(asctime)s] %(levelname)s:%(name)s:%(message)s", "%Y-%m-%d %H:%M:%S")
    )
nejdetckenobi
-1
import os
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    logger.info('## ENVIRONMENT VARIABLES')
    logger.info(os.environ)
    logger.info('## EVENT')
    logger.info(event)
Huzefa Khan