I have a Remix application that uses a Cloud SQL database with a public IP and Prisma as the ORM. The app is hosted on Cloud Run, and Cloud Build handles the build process. The issue I'm having is running migrations from my cloudbuild.yaml file: I can't get Cloud Build to connect to Cloud SQL, and the build currently fails on the migrate step.
This is what I get back from the console
Already have image (with digest): node:16
2023/08/11 01:31:12 Authorizing with Application Default Credentials
2023/08/11 01:31:12 [yellowingreen:us-central1:yellow-in-green-1-dev] Listening on /cloudsql/yellowingreen:us-central1:yellow-in-green-1-dev
2023/08/11 01:31:12 The proxy has started successfully and is ready for new connections!
Prisma schema loaded from prisma/schema.prisma
Datasource "db": MySQL database "dev" at "34.171.173.136:3306"
Error: P1001: Can't reach database server at `34.171.173.136`:`3306`
Please make sure your database server is running at `34.171.173.136`:`3306`.
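For context, the substitutions on my build trigger look roughly like this (user and password are placeholders), so DATABASE_URL points straight at the instance's public IP rather than at the proxy:
substitutions:
  _DATABASE_URL: "mysql://DB_USER:DB_PASSWORD@34.171.173.136:3306/dev"
  _DATABASE_CONNECTION: "yellowingreen:us-central1:yellow-in-green-1-dev"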
Below is my cloudbuild.yaml file
steps:
  # Install Node.js dependencies
  - id: npm-install
    name: node:16
    entrypoint: npm
    args: ["install", "--frozen-lockfile"]
    timeout: "4000s"
    waitFor: ["-"]
  # Install Cloud SQL proxy
  - id: proxy-install
    name: node:16
    entrypoint: sh
    timeout: "4000s"
    args:
      - "-c"
      - "wget https://storage.googleapis.com/cloud-sql-connectors/cloud-sql-proxy/v2.4.0/cloud-sql-proxy.linux.amd64 -O cloud_sql_proxy && chmod +x cloud_sql_proxy"
    waitFor: ["npm-install"]
  # Migrate database schema to the latest version
  # =tcp:3306
  - id: migrate
    name: node:16
    entrypoint: sh
    env:
      - DATABASE_URL=$_DATABASE_URL
    args:
      - "-c"
      - "mkdir /cloudsql; chmod 777 /cloudsql; (./cloud_sql_proxy --unix-socket /cloudsql $_DATABASE_CONNECTION & sleep 2) && npx prisma migrate deploy"
    timeout: "1200s"
    waitFor: ["npm-install", "proxy-install"]
  - id: build
    name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t", "gcr.io/$PROJECT_ID/$_SERVICE_NAME:$COMMIT_SHA", "."]
    waitFor: ["migrate"]
  - id: push
    name: "gcr.io/cloud-builders/docker"
    args: ["push", "gcr.io/$PROJECT_ID/$_SERVICE_NAME:$COMMIT_SHA"]
    waitFor: ["build"]
  - id: deploy
    name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
    entrypoint: gcloud
    args:
      [
        "run",
        "deploy",
        "$_SERVICE_NAME",
        "--image",
        "gcr.io/$PROJECT_ID/$_SERVICE_NAME:$COMMIT_SHA",
        "--region",
        "$_DEPLOY_REGION",
      ]
images: ["gcr.io/$PROJECT_ID/$_SERVICE_NAME:$COMMIT_SHA"]
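One idea I had was to point DATABASE_URL at the unix socket the proxy creates instead of at the public IP. If I'm reading the Prisma docs right, the MySQL connector accepts the socket path as a query parameter, so the substitution would become something like this (credentials are placeholders):
_DATABASE_URL: "mysql://DB_USER:DB_PASSWORD@localhost/dev?socket=/cloudsql/yellowingreen:us-central1:yellow-in-green-1-dev"
But I haven't been able to confirm whether that's the right way to wire it up with the proxy step above.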
Here is my Dockerfile
# base node image
FROM node:16-bullseye-slim as base
# Build the dev image
FROM base as build
RUN mkdir /app/
WORKDIR /app/
COPY . /app
RUN npm install
RUN npm run build
# Get the production modules
FROM base as production-deps
RUN mkdir /app/
WORKDIR /app/
COPY --from=build /app/node_modules /app/node_modules
ADD prisma .
RUN npx prisma generate
ADD package.json package-lock.json .npmrc /app/
RUN npm prune --production
# Pull out the build files and do a production install
FROM base
ENV NODE_ENV=production
RUN mkdir /app/
WORKDIR /app/
ADD package.json package-lock.json .npmrc /app/
COPY --from=build /app/node_modules/.prisma /app/node_modules/.prisma
COPY --from=build /app/public /app/public
COPY --from=build /app/build /app/build
COPY --from=build /app/server.js /app/server.js
COPY --from=build /app/prisma /app/prisma
COPY --from=production-deps /app/node_modules /app/node_modules
# RUN npx prisma migrate deploy
# RUN npx prisma db seed
CMD ["npm", "start"]
I have read over this documentation many times, but I find it very confusing: https://cloud.google.com/sql/docs/mysql/connect-build. I'm not sure whether I need multiple cloudbuild.yaml files or just one if I use the Cloud SQL proxy. Basically, every push to my repo should trigger a new build, apply any new migrations, and deploy. But I'm not sure whether the migration should run in the cloudbuild.yaml file or in the Dockerfile.
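If running it from the image is the better approach, I assume I could drop the commented-out RUN npx prisma migrate deploy and instead run the migration when the container starts, something like:
CMD ["sh", "-c", "npx prisma migrate deploy && npm start"]
But I don't know whether that's a reasonable pattern on Cloud Run.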
Any help or feedback is appreciated. Like I said, I have read over the Cloud Build / Cloud SQL connection documentation, but it's confusing, and I feel like this is much more difficult than it should be with all the extra steps involved.