I have a monorepo set up, and the cloudbuild.yaml
file in the root of the repository spins off child Cloud Build jobs in its first step:
# Trigger builds for all packages in the repository.
- name: "gcr.io/cloud-builders/gcloud"
  entrypoint: "bash"
  args: [
    "./scripts/cloudbuild/build-all.sh",
    # Child builds don't have the git context, so pass them the SHORT_SHA.
    "--substitutions=_TAG=$SHORT_SHA",
  ]
  timeout: 1200s # 20 minutes
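Each package directory then has its own cloudbuild.yaml that picks up the _TAG substitution. Mine look roughly like this (the docker step and image name below are just a simplified placeholder, not my exact file):
# packages/<package>/cloudbuild.yaml (simplified placeholder)
steps:
  - name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t", "gcr.io/$PROJECT_ID/my-package:$_TAG", "."]
images:
  - "gcr.io/$PROJECT_ID/my-package:$_TAG"
substitutions:
  _TAG: "latest" # overridden by the parent build's --substitutions=_TAG=$SHORT_SHA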
The build-all script is something I copied from the community builders repo:
#!/usr/bin/env bash
DIR_NAME="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
set -e # Sets mode to exit on an error, without executing the remaining commands.

for d in {packages,ops/helm,ops/pulumi}/*/; do
  config="${d}cloudbuild.yaml"
  if [[ ! -f "${config}" ]]; then
    continue
  fi

  echo "Building $d ... "
  (
    gcloud builds submit . --config=${config} $*
  ) &
done
wait
It waits until all of the child builds are done before continuing to the next step... handy!
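Since the extra arguments are forwarded via $*, each iteration effectively ends up running something like this (packages/foo is just a made-up package directory here, and the SHA value has already been expanded by the parent build):
gcloud builds submit . --config=packages/foo/cloudbuild.yaml --substitutions=_TAG=abc1234 &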
The only problem is that if any of the child builds fail, the parent build still continues to the next step.
Is there a way to make this step fail when any of the child builds fail? I guess my script isn't returning the correct exit code...?
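In case it helps, here's a tiny local experiment (plain bash, not Cloud Build) showing what I suspect is going on — a bare wait seems to report success even when a backgrounded job failed:
#!/usr/bin/env bash
set -e
( exit 1 ) & # stand-in for a failing child build
( exit 0 ) & # stand-in for a passing child build
wait
echo "status after wait: $?" # prints 0, so set -e never triggers and the script exits successfully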