
I am using an Azure Functions Service Bus trigger in Python to receive messages in batches from a Service Bus queue. Although this process is not well documented for Python, I managed to enable batch processing by following the GitHub PR below.

https://github.com/Azure/azure-functions-python-library/pull/73

Here is the sample code I am using:

function.json

{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "msg",
      "type": "serviceBusTrigger",
      "direction": "in",
      "cardinality": "many",
      "queueName": "<some queue name>",
      "dataType": "binary",
      "connection": "SERVICE_BUS_CONNECTION"
    }
  ]
}

__init__.py

import logging

import azure.functions as func
from typing import List

def main(msg: List[func.ServiceBusMessage]):
    message_length = len(msg)
    if message_length > 1:
        logging.warning('Handling multiple requests')

    for m in msg:
        # some call to an external web API

host.json

{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[3.3.0, 4.0.0)"
  },
  "extensions": {
    "serviceBus": {
      "prefetchCount": 100,
      "messageHandlerOptions": {
        "autoComplete": true,
        "maxConcurrentCalls": 32,
        "maxAutoRenewDuration": "00:05:00"
      },
      "batchOptions": {
        "maxMessageCount": 100,
        "operationTimeout": "00:01:00",
        "autoComplete": true
      }
    }
  }
}

After using this code, I can see that the Service Bus trigger is picking up messages in batches of 100 (or sometimes fewer than 100) based on maxMessageCount, but I have also observed that most of the messages end up in the dead-letter queue with the MaxDeliveryCountExceeded reason code. I have tried different values of MaxDeliveryCount from 10 to 20, but got the same result. So my question is: do we need to adjust/optimize MaxDeliveryCount when batch processing Service Bus messages? How are the two related? What configuration change can be made to avoid this dead-letter issue?

Niladri
  • `MaxDeliveryCountExceeded` is normally caused by failed message processing. Are you certain you are not throwing any errors anywhere in your actual code, for example if there is an error calling the API? – Alex AIT Feb 10 '23 at 07:24
  • @AlexAIT Yes, we are getting a 502 error from the backend API, which is a Python web app running on a Linux App Service plan. It happens when we send more than 10 messages in a batch. Is there any limit on the maximum number of concurrent HTTP requests to a Linux App Service in Azure? Can we increase it? – Niladri Feb 10 '23 at 08:28
  • @AlexAIT We are also getting the following error in the Service Bus trigger: `The lock supplied is invalid. Either the lock expired, or the message has already been removed from the queue`. I am using Python, so there is no option to manually complete the message if I set `autoComplete: false`. – Niladri Feb 10 '23 at 08:37
  • 1
  • The lock error indicates that you are not processing the whole batch within maxAutoRenewDuration. My guess would be that your API has an issue or is fairly slow. If you replace the API call with a one-second delay, I expect no more errors. You then need to tweak the settings and the API. – Alex AIT Feb 10 '23 at 14:34
  • @AlexAIT I am using FastAPI, but I agree it is kind of slow since I don't have parallel request handling configured, so it takes on average 4 seconds to complete one response from the API to the function app. Should I try to increase `maxAutoRenewDuration`? Can you elaborate on the 1-second delay? I did not understand it. – Niladri Feb 10 '23 at 16:31
  • 1
  • You can also try increasing the duration. If you have a batch of 100 messages at 4 seconds per message, that is more than the current `maxAutoRenewDuration` of 5 minutes. The 1-second delay was just to prove that the API is the problem: don't call the API, just simulate it to show that batch processing works. – Alex AIT Feb 10 '23 at 16:38
  • Do you have further questions on the topic? If not, please consider marking the answer as accepted. – Alex AIT Feb 14 '23 at 09:45
  • @AlexAIT Yes, thanks. It improved the performance a little bit. I don't see messages going into the dead-letter queue now. But I think I need to enable parallel requests on the FastAPI side. – Niladri Feb 15 '23 at 07:57

1 Answer


From what we discussed in the comments, this is what is happening:

  • Your function app is fetching 100 messages from Service Bus (prefetchCount) and locking them for a maximum of maxAutoRenewDuration.
  • Your function code is processing messages one at a time, at a slow rate, because of the API you call.
  • By the time you finish a batch of messages (maxMessageCount), the lock has already expired, which is why you get exceptions and the messages are redelivered. This eventually causes MaxDeliveryCountExceeded errors.
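
A quick back-of-the-envelope check with the numbers from the comments shows why: at roughly 4 seconds per API call, a batch of 100 messages takes about 100 × 4 s ≈ 400 s, which is longer than the 300 s (00:05:00) maxAutoRenewDuration. Every message completed after the lock window ends fails with the "lock supplied is invalid" error, gets redelivered, and eventually hits MaxDeliveryCount.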

What can you do to improve this?

  • Reduce maxMessageCount and prefetchCount
  • Increase maxAutoRenewDuration so that it covers the time a full batch actually needs (see the host.json sketch after this list)
  • Increase the performance of your API (how to do that would be a different question)
  • Your current code would be much better off using a "normal" single-message trigger instead of the batch trigger (see the sketch below the host.json example)
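
As an illustration only, assuming your API stays at roughly 4 seconds per call, a more forgiving host.json could look like the sketch below (keep your existing extensionBundle section; the numbers are a starting point to tune, not tested values):

{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "prefetchCount": 20,
      "messageHandlerOptions": {
        "autoComplete": true,
        "maxConcurrentCalls": 32,
        "maxAutoRenewDuration": "00:10:00"
      },
      "batchOptions": {
        "maxMessageCount": 20,
        "operationTimeout": "00:01:00",
        "autoComplete": true
      }
    }
  }
}

With 20 messages at ~4 s each, a batch takes about 80 s, which stays well within the 10-minute lock renewal window.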

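If you switch to the single-message trigger, a minimal sketch could look like this (set "cardinality" to "one" in function.json, or remove the setting since "one" is the default; the API call below is just a placeholder for whatever you call today):

import logging

import azure.functions as func


def main(msg: func.ServiceBusMessage):
    # One invocation per message: the runtime parallelizes work via
    # maxConcurrentCalls, and a slow API call only risks this message's lock.
    body = msg.get_body().decode('utf-8')
    logging.info('Processing message: %s', body)
    # placeholder: call your external web API here with the message body
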
PS: Beware that your function app may scale horizontally if you are running in a consumption plan, further increasing the load on your struggling API.

Alex AIT