Background
I have an API Gateway endpoint that proxies to a Lambda function (Lambda A), which my React application calls to fetch customer data.
This Lambda function makes an API call to fetch the customer data, but the format of the response leaves a lot to be desired, so I want to reformat it.
Rather than stuff this reformatting logic into Lambda A, I wrote a separate Lambda function (Lambda B). I need to invoke both of these functions when my API Gateway endpoint is hit, and the output from the first is the input to the second.
First thought: Step Functions
Step Functions seemed like a natural fit, but there is a 32 KB limit on the size of the payload that can be passed between states. Our JSON blob of customer data often exceeds this.
The only "best practice" I've heard offered for this situation is to write the payload to S3 and pass just the object key to the next state.
This works, but I'm not thrilled about writing and deleting so many short-lived objects in S3. There may be tens or hundreds of thousands of these requests per day. So I've abandoned the Step Functions approach (for now).
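For context, the state machine under that workaround is trivial; the S3 write and read live inside the Lambdas themselves, and only a small pointer crosses the state boundary. A sketch in Amazon States Language, with placeholder ARNs and hypothetical state names:

```json
{
  "StartAt": "FetchCustomerData",
  "States": {
    "FetchCustomerData": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:lambda-a",
      "Comment": "Writes the full JSON blob to S3, outputs only { bucket, key }",
      "Next": "ReformatCustomerData"
    },
    "ReformatCustomerData": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:lambda-b",
      "Comment": "Reads the blob back from S3 using the key from the previous state",
      "End": true
    }
  }
}
```

The 32 KB limit applies only to the `{ bucket, key }` pointer passed between the states, which is why this pattern sidesteps it.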
Current Approach
I'm currently invoking Lambda B directly from Lambda A using the JavaScript SDK. This has a fair amount of downside, notably that I'm running two Lambdas concurrently at times with no performance benefit. In other words, I'm paying for Lambda A to just sit there and wait on the response from Lambda B (which I'm also paying for).
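The direct invocation looks roughly like this (a minimal sketch using the AWS SDK for JavaScript v3; the function name "lambda-b" and the payload shape are assumptions):

```javascript
// Sketch: Lambda A calling Lambda B synchronously via the AWS SDK v3.
// "lambda-b" is a hypothetical function name.

// Decode an Invoke response payload (a Uint8Array) back into JSON.
const decodePayload = (payload) =>
  JSON.parse(new TextDecoder().decode(payload));

async function reformatViaLambdaB(rawCustomerData) {
  // Lazy require keeps this sketch loadable without the SDK installed.
  const { LambdaClient, InvokeCommand } = require("@aws-sdk/client-lambda");
  const client = new LambdaClient({});

  const response = await client.send(
    new InvokeCommand({
      FunctionName: "lambda-b",          // hypothetical name
      InvocationType: "RequestResponse", // synchronous: A waits (and is billed) while B runs
      Payload: JSON.stringify(rawCustomerData),
    })
  );
  return decodePayload(response.Payload);
}
```

The `RequestResponse` invocation type is exactly what makes this feel wasteful: Lambda A's billed duration includes the entire time it spends awaiting Lambda B.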
It feels like an anti-pattern, and I've heard it characterized as such.
The Question(s)
This seems like a relatively common scenario: make an API call (function A), then execute some additional logic to supplement, reformat, or otherwise modify the response (function B) before passing it back to the caller.
Surely I'm not the first person to want to use two Lambda functions to do something like this.
What are my options for doing this with two Lambda functions, assuming I can't use Step Functions?
Are there other ways to work around Step Functions' 32 KB payload size limit besides using S3?
If I'm silly for wanting to avoid the S3/Step Function approach, answers explaining why my concerns are unwarranted would also be welcome.
Edit
"Why do you even consider splitting the functionality of fetching the data and processing it into two different AWS Lambda functions?"
Imagine that instead of just Lambda A, I have two dozen Lambdas that need to consume the functionality of Lambda B.
So I package (the functionality of) Lambda B up, publish it to Nexus, and my other two dozen Lambdas all consume it at build time. All my Lambdas swell in size, and I have to publish more npm packages as I accumulate more "Lambda B"s. This is what I want to avoid.
I want my "Lambda A"s to consume other lambdas, rather than npm packages, for widely shared functionality. Maybe I am taking the "function" in "lambda function" too literally, or maybe I'm just trying to leverage FaaS to its full potential.