I have a Lambda function that reads from DynamoDB and creates a large file (~500 MB) in /tmp, which is then uploaded to S3. Once the upload completes, the function deletes the file from /tmp (since there is a high probability that the execution environment will be reused).
The function takes about a minute to execute, even ignoring network latencies.
If I invoke the function again within that minute, I have no guarantee that there will be enough space in /tmp to write the new file, and the function fails.
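For reference, here is a minimal sketch of what the function currently does (the bucket, table, and file names are placeholders, and the DynamoDB scan is simplified):

```python
import os
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")

BUCKET = "my-output-bucket"   # placeholder
TABLE = "my-source-table"     # placeholder

def handler(event, context):
    path = "/tmp/export.dat"
    try:
        table = dynamodb.Table(TABLE)
        with open(path, "w") as f:
            # paginate through the table and append rows to the file (~500 MB total)
            resp = table.scan()
            while True:
                for item in resp["Items"]:
                    f.write(str(item) + "\n")
                if "LastEvaluatedKey" not in resp:
                    break
                resp = table.scan(ExclusiveStartKey=resp["LastEvaluatedKey"])

        # upload the finished file to S3
        s3.upload_file(path, BUCKET, "export.dat")
    finally:
        # always delete the file, since the execution environment may be reused
        if os.path.exists(path):
            os.remove(path)
```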
Questions:
1. What are the known workarounds for this kind of scenario? (For example, getting more space in /tmp, or guaranteeing a clean /tmp for each new execution.)
2. What are the best practices for file creation and management in Lambda?
3. Can I attach EBS or other storage to a Lambda function for its execution?
4. Is there a way to get file-system-like access to S3, so that my function can write directly to S3 instead of using /tmp?