I have an AWS Lambda function set up with an SQS queue as its trigger. As part of my Lambda logic, I'm updating a record in my PostgreSQL DB using node-postgres.
The function is as follows:
import pg from 'pg';
import type { SQSEvent } from 'aws-lambda';

// The User type and the db* config values (read from environment variables) are defined elsewhere.

export const handler = async (event: SQSEvent): Promise<void> => {
  try {
    // The SQS message body contains the user to update.
    const user: User = JSON.parse(event.Records[0].body) as User;
    console.log('Updating user ', user.id);

    const client = new pg.Client({
      user: dbUsername,
      host: dbHost,
      database: dbName,
      password: dbPassword,
      port: dbPort,
      ssl: true,
    });

    await client.connect();
    await client.query('UPDATE users SET last_login = $1 WHERE id = $2', [
      new Date().toISOString(),
      user.id,
    ]);
    await client.end();

    console.log('User updated');
  } catch (error) {
    console.error(error);
    throw error;
  }
};
When this Lambda is triggered by SQS, it seems to stop executing during the PG client's interaction with the database. In my CloudWatch logs I can see the Updating user <user-id> log, and then the function exits right away with no timeout or error logs.
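One thing I plan to try is giving the client a connection timeout and an error listener, so a hanging or failing connect at least shows up in the logs. A rough sketch (the 5 second values are arbitrary):

// Sketch: bound the connect/query time and log client-level errors instead of hanging silently.
const client = new pg.Client({
  user: dbUsername,
  host: dbHost,
  database: dbName,
  password: dbPassword,
  port: dbPort,
  ssl: true,
  connectionTimeoutMillis: 5000, // fail the connect after 5s rather than waiting indefinitely
  query_timeout: 5000,           // bound individual queries as well
});

// Errors emitted outside an awaited call would otherwise be easy to miss.
client.on('error', (err) => {
  console.error('pg client error', err);
});

try {
  await client.connect();
  console.log('Connected to Postgres');
} catch (err) {
  console.error('Failed to connect to Postgres', err);
  throw err;
}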
My PostgreSQL DB is hosted on RDS inside a VPC, and the Lambda function is in the same VPC. The PG config values are provided via environment variables, and I've double-checked that they are set to the correct values.
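For reference, the config values come from environment variables along these lines (the variable names here are placeholders rather than my exact ones):

// Illustrative only: the actual env var names differ in my setup.
const dbUsername = process.env.DB_USERNAME as string;
const dbPassword = process.env.DB_PASSWORD as string;
const dbHost = process.env.DB_HOST as string;
const dbName = process.env.DB_NAME as string;
const dbPort = Number(process.env.DB_PORT ?? 5432);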
My initial thought was that a promise wasn't being awaited, though if that were the case I'd expect to see the final User updated console log in my CloudWatch logs.
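For example, if the query weren't awaited, something like the following would still fall through to the final log line even though the update might never complete:

// Hypothetical bug for illustration: without await, execution continues immediately,
// so 'User updated' would still be logged even if the query never finishes.
client.query('UPDATE users SET last_login = $1 WHERE id = $2', [
  new Date().toISOString(),
  user.id,
]);
console.log('User updated');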
I ran the same code locally and it updated the DB record with no issues. Is there something I could be missing in AWS that might cause the Lambda to exit early? Moreover, if a genuine error is occurring somewhere, why is the Lambda function exiting silently instead of logging anything in CloudWatch?