
Our system differentiates between what we're calling 'single events' and 'bulk events'. Single events are the result of user interactions and arrive just a few at a time, now and then. Bulk events are the result of admin actions; a single admin action can produce hundreds of events that arrive at essentially the same time.

All the events arrive at a lambda that detects the bulk events and sends them into an SNS FIFO topic. An SQS FIFO queue is subscribed to that topic, and a second lambda is triggered by the queue. The second lambda sends the bulk events into the system. We need the bulk events to enter the system slowly so that we don't exceed the API rate limit of a third-party downstream system.
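
For context, the second lambda looks roughly like this (a simplified sketch: the handler shape and downstream URL are placeholders, and it assumes raw message delivery is enabled on the SNS-to-SQS subscription):

```python
import json
import urllib.request

# Placeholder for the rate-limited third-party endpoint.
DOWNSTREAM_URL = "https://thirdparty.example.com/events"

def handler(event, context):
    # One SQS message per record; with raw message delivery enabled the body
    # is the original event, otherwise it's wrapped in an SNS envelope.
    for record in event["Records"]:
        payload = json.loads(record["body"])
        req = urllib.request.Request(
            DOWNSTREAM_URL,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        # This call is the one subject to the third-party rate limit.
        urllib.request.urlopen(req)
```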

We're experimenting with the batch size on the lambda's SQS trigger and setting the reserved concurrency on the second lambda to a low value, but it appears we need to slow things down quite a bit more. We're considering adding a wait timer to the first lambda so that it inserts a 5-10 second delay before sending each individual event into SNS. That should spread the events out in time much more dramatically, and it seems like it might work, but it isn't a very satisfying approach.
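
The wait-timer idea would look something like this in the first lambda (a rough sketch: the topic ARN and the assumption that each event carries a unique id are placeholders):

```python
import json
import time
import boto3

sns = boto3.client("sns")
# Placeholder ARN for the FIFO topic.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:bulk-events.fifo"

def publish_bulk(events, group_id):
    """Publish a bulk batch one event at a time with a pause between each."""
    for event in events:
        sns.publish(
            TopicArn=TOPIC_ARN,
            Message=json.dumps(event),
            MessageGroupId=group_id,
            MessageDeduplicationId=event["id"],  # assumes each event has a unique id
        )
        time.sleep(7)  # somewhere in the 5-10 second range
```

One concern: at 5-10 seconds per event, a couple hundred events would push the first lambda past its 15-minute maximum timeout, which is part of why this doesn't feel satisfying.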

Are there other options for slowing down the throughput? Any ideas would be appreciated. Thanks.

  • Your options are to either use AWS Step Functions, which can introduce a delay without incurring Lambda charges while waiting, or do your batch processing on an Amazon EC2 instance and have it sleep between pulling requests from SQS. Small EC2 instances are very low-cost and if you have a large volume of bulk requests, could be a better option than using an AWS Lambda function. – John Rotenstein Sep 29 '21 at 09:29
  • Thank you, John. I'll read up on Step Functions. That sounds like a good lead. The entire project is Serverless, so I'll probably stick to lambdas instead of EC2. – dhollinden Oct 01 '21 at 13:57
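
A minimal sketch of the Step Functions idea from the comment above: a Map state that walks the bulk events one at a time, with a Wait state before each forward. The ARNs, names, and the `$.events` input shape are all placeholders.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# State machine that receives {"events": [...]} from the first lambda and
# forwards the events one at a time, pausing between each.
definition = {
    "StartAt": "ThrottleEvents",
    "States": {
        "ThrottleEvents": {
            "Type": "Map",
            "ItemsPath": "$.events",
            "MaxConcurrency": 1,  # process one event at a time
            "Iterator": {
                "StartAt": "Pause",
                "States": {
                    "Pause": {"Type": "Wait", "Seconds": 10, "Next": "Forward"},
                    "Forward": {
                        "Type": "Task",
                        "Resource": "arn:aws:lambda:us-east-1:123456789012:function:forward-event",
                        "End": True,
                    },
                },
            },
            "End": True,
        }
    },
}

sfn.create_state_machine(
    name="bulk-event-throttle",
    roleArn="arn:aws:iam::123456789012:role/bulk-event-throttle-role",
    definition=json.dumps(definition),
)
```

The Wait state doesn't accrue Lambda charges while it sleeps, which is the advantage John mentions.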

0 Answers