
What we are trying:
We are trying to run a Cloud Run Job that does some computation, using one of our custom packages for part of the work. The Cloud Run Job uses google-cloud-logging together with Python's standard logging package, as described here. The custom Python package also logs its data (only a logger is defined, as suggested here).
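
For reference, the logger inside our_package follows the standard recommendation for library code, roughly like this (a simplified sketch; the module path, message, and return value are placeholders):

# our_package/__init__.py -- the library only defines a logger and attaches
# a NullHandler; handler and level configuration is left to the application.
import logging

logger = logging.getLogger(__name__)
logger.addHandler(logging.NullHandler())

def do_something():
    # This record propagates up to whatever handlers the application set up.
    logger.info("do_something() was called")
    return "Result of do_something()"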

Simple illustration:

from google.cloud import logging as gcp_logging
import logging
import os
import google.auth
from our_package import do_something

def log_test_function():
    SCOPES = ["https://www.googleapis.com/auth/cloud-platform"]
    credentials, project_id = google.auth.default(scopes=SCOPES)
    try:
        function_logger_name = os.getenv("FUNCTION_LOGGER_NAME")

        logging_client = gcp_logging.Client(credentials=credentials, project=project_id)
        logging.basicConfig()
        logger = logging.getLogger(function_logger_name)
        logger.setLevel(logging.INFO)
        logging_client.setup_logging(log_level=logging.INFO)

        logger.critical("Critical Log TEST")
        logger.error("Error Log TEST")
        logger.info("Info Log TEST")
        logger.debug("Debug Log TEST")

        result = do_something()
        logger.info(result)
    except Exception as e:
        print(e)    # just to test how print works

    return "Returned"

if __name__ == "__main__":
    result = log_test_function()
    print(result)

Cloud Run Job Logs (screenshot):

The blue box indicates logs from the custom package.
The black box indicates logs from the Cloud Run Job.
Cloud Logging is not able to identify the severity of these logs; it parses every log entry at the default level.
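
As far as we understand, the Cloud Run logging agent only assigns a severity to structured log lines: a plain-text line on stdout/stderr is ingested at the default level, while a JSON line carrying a severity field is parsed. A minimal sketch of that fallback, independent of the client library (the helper name is ours):

import json

def log_structured(severity, message):
    # Cloud Run parses one JSON object per stdout line and maps the
    # "severity" field onto the log entry's severity.
    print(json.dumps({"severity": severity, "message": message}), flush=True)

log_structured("ERROR", "Error Log TEST")  # ingested as ERROR, not Default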

But if I run the same code in a Cloud Function, it works as expected (i.e., the severity levels of logs from both the Cloud Function and the custom package are respected), as shown in the image below.

Cloud Function Logs (screenshot)


Both are serverless architectures, so why does it work in Cloud Functions but not in Cloud Run?

What we want to do:
We want to log every message from the Cloud Run Job and the custom package to Cloud Logging with the correct severity.

We would appreciate your help!


Edit 1

Following the Google Cloud Python library committers' solution almost solved the problem. Below is the modified code.

from google.cloud import logging as gcp_logging
import logging
import os
import google.auth
from our_package import do_something
from google.cloud.logging.handlers import CloudLoggingHandler
from google.cloud.logging_v2.handlers import setup_logging
from google.cloud.logging_v2.resource import Resource
from google.cloud.logging_v2.handlers._monitored_resources import retrieve_metadata_server, _REGION_ID, _PROJECT_NAME

def log_test_function():
    SCOPES = ["https://www.googleapis.com/auth/cloud-platform"]
    region = retrieve_metadata_server(_REGION_ID)
    project = retrieve_metadata_server(_PROJECT_NAME)
    try:
        function_logger_name = os.getenv("FUNCTION_LOGGER_NAME")

        # build a manual resource object
        cr_job_resource = Resource(
            type="cloud_run_job",
            labels={
                "job_name": os.environ.get('CLOUD_RUN_JOB', 'unknownJobId'),
                "location": region.split("/")[-1] if region else "",
                "project_id": project
            }
        )

        logging_client = gcp_logging.Client()
        gcloud_logging_handler = CloudLoggingHandler(logging_client, resource=cr_job_resource)
        setup_logging(gcloud_logging_handler, log_level=logging.INFO)

        logging.basicConfig()
        logger = logging.getLogger(function_logger_name)
        logger.setLevel(logging.INFO)

        logger.critical("Critical Log TEST")
        logger.error("Error Log TEST")
        logger.warning("Warning Log TEST")
        logger.info("Info Log TEST")
        logger.debug("Debug Log TEST")

        result = do_something()
        logger.info(result)
    except Exception as e:
        print(e)    # just to test how print works

    return "Returned"

if __name__ == "__main__":
    result = log_test_function()
    print(result)

Now every log is logged twice: once as a severity-sensitive entry, and once as a severity-insensitive entry at the "default" level, as shown in the screenshot below.
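
Our current suspicion (unconfirmed): setup_logging() attaches the CloudLoggingHandler to the root logger, and the subsequent logging.basicConfig() call adds a plain StreamHandler next to it, so each record is sent once to the Cloud Logging API (severity preserved) and written once to stderr, which Cloud Run re-ingests at the default level. A sketch of the same setup without the extra stream handler:

import logging
import os

import google.cloud.logging
from google.cloud.logging.handlers import CloudLoggingHandler
from google.cloud.logging_v2.handlers import setup_logging
from google.cloud.logging_v2.handlers._monitored_resources import retrieve_metadata_server, _REGION_ID, _PROJECT_NAME
from google.cloud.logging_v2.resource import Resource

# Same manual resource object as above
region = retrieve_metadata_server(_REGION_ID)
project = retrieve_metadata_server(_PROJECT_NAME)
cr_job_resource = Resource(
    type="cloud_run_job",
    labels={
        "job_name": os.environ.get("CLOUD_RUN_JOB", "unknownJobId"),
        "location": region.split("/")[-1] if region else "",
        "project_id": project,
    },
)

client = google.cloud.logging.Client()
handler = CloudLoggingHandler(client, resource=cr_job_resource)
setup_logging(handler, log_level=logging.INFO)

# Deliberately no logging.basicConfig() here: it would add a StreamHandler
# to the root logger, whose stderr output Cloud Run would ingest a second
# time at the default severity, duplicating every record.
logger = logging.getLogger(os.getenv("FUNCTION_LOGGER_NAME"))
logger.setLevel(logging.INFO)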

2 Answers


Cloud Functions takes your code, wraps it in a webserver, builds a container, and deploys it.

With Cloud Run, you only build and deploy the container.

That means the Cloud Functions webserver wrapper does something more than you do: it correctly initializes the Python logger.

Have a look at that doc page; you should be able to solve your issue with it.


EDIT 1

I took the exact example and added it to my Flask server, like this:

import os
from flask import Flask

app = Flask(__name__)

import google.cloud.logging

# Instantiates a client
client = google.cloud.logging.Client()

# Retrieves a Cloud Logging handler based on the environment
# you're running in and integrates the handler with the
# Python logging module. By default this captures all logs
# at INFO level and higher
client.setup_logging()

# Imports Python standard library logging
import logging

@app.route('/')
def call_function():
    # The data to log
    text = "Hello, world!"

    # Emits the data using the standard logging module
    logging.warning(text)

    print("Logged: {}".format(text))
    return text


# For local execution
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=int(os.environ.get('PORT', 8080)))

A rough copy of that sample.

And the result is correct: a Logged: entry at the default level (the print), and my "Hello, world!" at warning severity (the logging.warning).



EDIT 2

Thanks to the help of the Google Cloud Python library committers, I got a solution to my issue while waiting for the native integration in the library.

Here is my new code, for Cloud Run Jobs this time:

import google.cloud.logging
from google.cloud.logging.handlers import CloudLoggingHandler
from google.cloud.logging_v2.handlers import setup_logging
from google.cloud.logging_v2.resource import Resource
from google.cloud.logging_v2.handlers._monitored_resources import retrieve_metadata_server, _REGION_ID, _PROJECT_NAME
import os

# find metadata about the execution environment
region = retrieve_metadata_server(_REGION_ID)
project = retrieve_metadata_server(_PROJECT_NAME)

# build a manual resource object
cr_job_resource = Resource(
    type="cloud_run_job",
    labels={
        "job_name": os.environ.get('CLOUD_RUN_JOB', 'unknownJobId'),
        "location": region.split("/")[-1] if region else "",
        "project_id": project
    }
)

# configure handling using CloudLoggingHandler with custom resource
client = google.cloud.logging.Client()
handler = CloudLoggingHandler(client, resource=cr_job_resource)
setup_logging(handler)

import logging

def call_function():
    # The data to log
    text = "Hello, world!"

    # Emits the data using the standard logging module
    logging.warning(text)

    print("Logged: {}".format(text))
    return text


# For local execution
if __name__ == "__main__":
    call_function()

And the result works great:

  • The Logged: .... entry is the simple print to stdout, at the default level.
  • The Warning entry appears as expected.
guillaume blaquiere
  • thank you for your response. Sorry, but I didn't get exactly what you are trying to say. I have followed the docs but still get the same result as shown in the image above for the Cloud Run Job. Perhaps my code is a slight modification of the docs code. I don't see any problem with how the logger is initialized in the above code. – Devesh Poojari Nov 04 '22 at 08:56
  • I have spent long enough on this problem, but with no success :/ I would appreciate your help! – Devesh Poojari Nov 04 '22 at 08:57
  • I updated my answer. Nothing special, only a copy-paste of the doc. I didn't test with jobs, but it should be the same. – guillaume blaquiere Nov 04 '22 at 15:58
  • I have tried your solution as well, it does not work inside Cloud Run Job. But it works in Cloud Function. – Devesh Poojari Nov 04 '22 at 17:50
  • indeed, doesn't work with jobs. let me search... – guillaume blaquiere Nov 04 '22 at 21:29
  • Spotted. In fact it works, but the label type of the logs is not correct, so you don't find them related to your Cloud Run job's run. But have a look at your log entries in the whole of Cloud Logging, you will find them! I will report the issue to the Google Cloud PM – guillaume blaquiere Nov 04 '22 at 22:04
  • Yes, I guess we can't find it under Cloud Run Job because it is getting allocated under Google Compute Engine (created by cloud run job) i.e. type=gce_instance instead of type=cloud_run_job. To summarize, logs are getting logged for cloud run job (severity insensitive logs) as well as Compute engine (severity sensitive logs). Thanks Guillaume! – Devesh Poojari Nov 05 '22 at 16:12
  • Could you please leave the link to the issue here so that we can keep a track of the progress. Thank you. – Devesh Poojari Nov 05 '22 at 16:15
  • It's the same issue in Golang. Issue Python: https://github.com/googleapis/python-logging/issues/663 – guillaume blaquiere Nov 06 '22 at 20:47
  • Have a look at the answer. It's interesting. I didn't test it, but it sounds like a good homemade solution while waiting for the final integration in the library. – guillaume blaquiere Nov 08 '22 at 12:57
  • I just tested, and the code in the GitHub comment works perfectly! – guillaume blaquiere Nov 08 '22 at 13:46
  • I tried the solution from the GitHub comment. I am getting a double entry for each log: one severity-sensitive log (which is required), the other a severity-insensitive log at the "default" level (like before). @guillaume you only got severity-sensitive logs? – Devesh Poojari Nov 08 '22 at 16:34
  • I have updated the post. I have tried some other configurations as well (e.g. adding the Cloud Logging handler explicitly to the logger, etc.) but still the same result. – Devesh Poojari Nov 09 '22 at 17:03
  • import logging AFTER the configuration of the logger – guillaume blaquiere Nov 09 '22 at 19:21
  • Somehow I have reached my quota limit :/ (Quota exceeded for quota metric 'Job run requests' and limit 'Job run requests per minute per region'). I'll try your suggestion once quota gets reset. Thanks – Devesh Poojari Nov 10 '22 at 13:28
  • Hi I resolved my quota problem and I tried importing logging after configuring the logger as suggested, but still I get double logs :/ – Devesh Poojari Nov 30 '22 at 16:16

You might want to get rid of loggers you don't need. Take a look at https://stackoverflow.com/a/61602361/13161301.
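
Applied here, that would mean stripping the plain stream handlers off the root logger after setup, something like this sketch (note that CloudLoggingHandler itself subclasses logging.StreamHandler, so compare the exact type rather than using isinstance):

import logging

root = logging.getLogger()
for h in list(root.handlers):
    # Drop only plain StreamHandlers (e.g. the one added by
    # logging.basicConfig()); keep CloudLoggingHandler and friends.
    if type(h) is logging.StreamHandler:
        root.removeHandler(h)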

Dokook Choe