I have a Node/Next.js app that I am containerizing with Docker and deploying to GCP Cloud Run. The container runs without issue locally, but on deployment to Cloud Run it crashes. The error logs show it going through the various startup steps, like installing the packages and running a migration, without a problem, but then it fails unexpectedly with the error message:
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
Uncaught signal: 6, pid=13, tid=13, fault_addr=0.
Uncaught signal: 6, pid=1, tid=1, fault_addr=0.
Container terminated on signal 6.
I find it a bit bizarre to run into this with such a simple application, and I am wondering what it means and what my options are, either to prevent it from occurring (a smaller Docker image?) or to resolve it (expanding the Node memory limit?).
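For the Node memory option, this is roughly what I had in mind (just a sketch; the 1024 MiB value is an arbitrary example, and I haven't verified it fits within the instance's memory limit):

# raise V8's old-space heap limit, either inline on the node invocation...
node --max-old-space-size=1024 server.js
# ...or via NODE_OPTIONS, which Cloud Run can inject as an env var without a rebuild
NODE_OPTIONS="--max-old-space-size=1024" node server.js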
Not sure what additional information is needed, but possibly relevant are:
package.json (scripts):
"scripts": {
  "dev": "nodemon server.js",
  "build": "next build",
  "start": "NODE_ENV=production npm run db:migrate && node server.js",
  "test": "jest --watch",
  "test:ci": "jest --ci",
  "db:migrate": "node_modules/.bin/sequelize db:migrate",
  "db:migrate:undo": "node_modules/.bin/sequelize db:migrate:undo",
  "db:migrate:undo:all": "node_modules/.bin/sequelize db:migrate:undo:all",
  "db:seed": "node_modules/.bin/sequelize db:seed:all",
  "db:seed:undo:all": "node_modules/.bin/sequelize db:seed:undo:all"
},
docker-compose.yml:
version: "3.9"
services:
  redis:
    image: redis:alpine
  database:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: 'postgres'
      POSTGRES_PASSWORD: ''
      POSTGRES_DB: ${DB_DATABASE}
    volumes:
      - nextjs_auth_boilerplate:/var/lib/postgresql/data/ # persist data even if container shuts down
  app:
    image: nextjs-auth-boilerplate
    build: .
    depends_on:
      - redis
      - database
    command: ["./wait-for-it.sh", "database:5432", "--", "npm", "start"]
    ports:
      - "3000:3000"
    environment:
      - REDIS_HOST=redis
      - DB_HOSTNAME=database
volumes:
  nextjs_auth_boilerplate:
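For completeness, the other knob I am considering is the Cloud Run instance memory itself, along these lines (a sketch; my-service is a placeholder for the actual service name):

# bump the per-instance memory limit (512 MiB by default) to e.g. 1 GiB
gcloud run services update my-service --memory 1Gi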