Motive
I want to fully automate the deployment of many services with the help of Google Cloud Build and Google Kubernetes Engine. Those services live inside a monorepo, in a folder called services.
So I created a cloudbuild.yaml for each service and set up a build trigger for it. Each cloudbuild.yaml does the following:
- run the tests
- build a new version of the Docker image
- push the new Docker image
- apply the changes to the Kubernetes cluster
Issue
As the number of services increases, the number of build triggers increases, too. There are also more and more services that are built even though they haven't changed.
Thus I want a mechanism with only one build trigger that automatically determines which services need to be rebuilt.
Example
Suppose I have a monorepo with this file structure:
├── packages
│   ├── enums
│   └── components
└── services
    ├── backend
    ├── frontend
    └── admin-dashboard
Then I make some changes in the components package. Since both the frontend and the admin-dashboard services depend on the components package, multiple services need to be rebuilt:
- frontend
- admin-dashboard

But not backend!
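To make the goal concrete, here is a rough sketch of the mapping I'm after. The DEPENDENTS table is a hypothetical stand-in for the real dependency graph (in practice it would be derived from each package's package.json):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: map changed file paths to the set of services
# that must be rebuilt. The dependency table below is an assumption.
declare -A DEPENDENTS=(
  ["packages/components"]="services/frontend services/admin-dashboard"
  ["packages/enums"]="services/backend services/frontend"
)

affected_services() {
  local out=() path pkg svc
  for path in "$@"; do
    case "$path" in
      # A change inside a service affects that service directly.
      services/*) out+=("$(echo "$path" | cut -d/ -f1-2)") ;;
      # A change inside a package affects every dependent service.
      packages/*)
        pkg="$(echo "$path" | cut -d/ -f1-2)"
        for svc in ${DEPENDENTS[$pkg]-}; do out+=("$svc"); done ;;
    esac
  done
  printf '%s\n' "${out[@]}" | sort -u
}

affected_services "packages/components/src/Button.tsx"
```

For a change under packages/components this prints services/admin-dashboard and services/frontend, but not services/backend.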
What I've Tried
(1) Multiple build triggers
Setting up a separate build trigger for every service. But about 80% of those builds are redundant, since most code changes only affect an individual service. It's also increasingly complex to manage many build triggers that look almost identical. A single cloudbuild.yaml file looks like this:
steps:
  - name: "gcr.io/cloud-builders/docker"
    args:
      [
        "build",
        "-f",
        "./services/frontend/prod.Dockerfile",
        "-t",
        "gcr.io/$PROJECT_ID/frontend:$REVISION_ID",
        "-t",
        "gcr.io/$PROJECT_ID/frontend:latest",
        ".",
      ]
  - name: "gcr.io/cloud-builders/docker"
    args: ["push", "gcr.io/$PROJECT_ID/frontend"]
  - name: "gcr.io/cloud-builders/kubectl"
    args: ["apply", "-f", "kubernetes/gcp/frontend.yaml"]
    env:
      - "CLOUDSDK_COMPUTE_ZONE=europe-west3-a"
      - "CLOUDSDK_CONTAINER_CLUSTER=cents-ideas"
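One mitigation I'm aware of (though it still leaves me with one trigger per service) is scoping each trigger with an included-files glob, so commits that don't touch the service or its packages don't start a build. A sketch with placeholder repo values:

```shell
# Hypothetical trigger setup; repo-name/repo-owner are placeholders.
# --included-files restricts the trigger to commits touching these globs.
gcloud beta builds triggers create github \
  --repo-name="project" \
  --repo-owner="username" \
  --branch-pattern='^master$' \
  --build-config="services/frontend/cloudbuild.yaml" \
  --included-files="services/frontend/**,packages/**"
```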
(2) Looping through cloudbuild files
This question is about a very similar issue. So I tried to set up one "entry-point" cloudbuild.yaml file in the root of the project and loop through all services:
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        for d in ./services/*/; do
          config="${d}cloudbuild.yaml"
          if [[ ! -f "${config}" ]]; then
            continue
          fi
          echo "Building ${d} ..."
          (
            gcloud builds submit "${d}" --config="${config}"
          ) &
        done
        wait
This would eliminate the need for multiple build triggers. But I also ran into an issue with this method: every service is sent into its own build process, scoped to the files of that particular service. This means I can only access files inside /services/specific-service during the build, which is a total bummer for me (I need access to files in parent directories like packages and to config files in the root).
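A variation I could try (hypothetical sketch): submit the repo root as the build context for every service, so parent directories like packages stay accessible inside the build. Here _SERVICE is an assumed user-defined substitution that each per-service cloudbuild.yaml would use to locate its Dockerfile:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: submit the repo *root* (".") as the build context
# instead of the service directory, so the full source tree is in scope.

# Discover services that ship their own cloudbuild.yaml.
services_with_config() {
  local root="$1" d
  for d in "$root"/services/*/; do
    [[ -f "${d}cloudbuild.yaml" ]] || continue
    basename "$d"
  done
}

# Submit one build per service, from the repo root, passing the service
# name via the assumed _SERVICE substitution.
submit_all() {
  local service
  for service in $(services_with_config .); do
    echo "Building ${service} ..."
    gcloud builds submit . \
      --config="./services/${service}/cloudbuild.yaml" \
      --substitutions=_SERVICE="${service}" &
  done
  wait
}
# submit_all   # run from the repo root
```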
(3) Build only changed services
Since I want a mechanism to only build changed services, I've tried to determine the services that need to be rebuilt. It seems quite easy to do this with the help of lerna. Running
lerna changed --all --parseable
will return a list of file paths to the changed packages, like this:
/home/username/Desktop/project/packages/components
/home/username/Desktop/project/services/frontend
/home/username/Desktop/project/services/admin-dashboard
However, the list also includes packages, and I have no idea how I could use this list in a script to loop through the affected services. Also: when I trigger a build (e.g. by tagging a commit), lerna wouldn't be able to recognize the changed packages during the build process, as the changes have already been committed.
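For what it's worth, here is a sketch of how such a list of paths could be reduced to service names. And since the changes are already committed when the build runs, the list could come from git instead of lerna ($COMMIT_SHA is a default Cloud Build substitution):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: turn a list of changed paths (from lerna or git)
# into the set of service names to rebuild. Paths under packages/ are
# simply dropped here; mapping them to their dependent services would
# additionally need the dependency graph.
changed_services() {
  grep -o 'services/[^/]*' | cut -d/ -f2 | sort -u
}

# Inside a triggered build, derive the changed paths from git rather
# than `lerna changed`, e.g.:
#   git diff --name-only "$COMMIT_SHA^" "$COMMIT_SHA" | changed_services
```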
I know this is a long one. But I think it's an important topic, so I really appreciate any help!
P.S.: This is what my actual project looks like, if you want to take a closer look at the specific use case.