
When deploying a docker container image to Cloud Run, I can choose a region, which is fine. Cloud Run delegates the build to Cloud Build, which apparently creates two buckets to make this happen. The unexpected behavior is that buckets aren't created in the region of the Cloud Run deployment, and instead default to multi-region US.

How do I specify the region as "us-east1" so the cost of storage is absorbed by the "always free" tier? (Apparently US multi-region storage buckets store data in regions outside of the free tier limits, which resulted in a surprise bill - I am trying to avoid that bill.)

If it matters, I am also using Firebase in this project. I created the Firebase default storage bucket in the us-east1 region with the hopes that it might also become the default for other buckets, but this is not so. The final bucket list looks like this, where you can see the two buckets created automatically with the undesirable multi-region setting.

[screenshot: bucket list showing the two automatically created buckets with the Multi-region US location]

This is the shell script I'm using to build and deploy:

#!/bin/sh

project_id=$1
service_id=$2

if [ -z "$project_id" ]; then
    echo "First argument must be the Google Cloud project ID" >&2
    exit 1
fi

if [ -z "$service_id" ]; then
    echo "Second argument must be the Cloud Run app name" >&2
    exit 1
fi

echo "Deploying $service_id to $project_id"

tag="gcr.io/$project_id/$service_id"

gcloud builds submit \
    --project "$project_id" \
    --tag "$tag" \
&& \
gcloud run deploy "$service_id" \
    --project "$project_id" \
    --image "$tag" \
    --platform managed \
    --update-env-vars "GOOGLE_CLOUD_PROJECT=$project_id" \
    --region us-central1 \
    --allow-unauthenticated
Doug Stevenson
    I think this is a duplicate of https://stackoverflow.com/questions/51595900/run-google-cloud-build-in-a-specific-region-and-zone. You should still be able to email cloud-build-contact@google.com to get access to the early-access program. – Dustin Ingram Apr 02 '20 at 00:59
  • Is not at all, actually the question is about in which region or zone the artifacts are being stored. – Puteri Apr 02 '20 at 01:10
  • @DustinIngram This is just about the region of the stored artifacts. I don't care where the computing resources are that handle the build, or even how they work. I'm just running gcloud commands to build and deploy. I've edited the question to be specific about that. – Doug Stevenson Apr 02 '20 at 02:07
  • @FernandoRV Yes, this is just about the artifacts. I see some instructions out there about using yaml files that let you specify a container registry, but this seems like overkill, and there doesn't seem to be any simple gcloud CLI options that talk about how these buckets are managed. – Doug Stevenson Apr 02 '20 at 02:10
  • Gotcha, sorry I misread! – Dustin Ingram Apr 02 '20 at 02:16

2 Answers


As you mention, Cloud Build creates the bucket or buckets as multi-region because when the service is deployed to Cloud Run, only the flags needed to deploy the service are passed; no storage location is ever specified.

The documentation for the command gcloud builds submit mentions the following for the flag --gcs-source-staging-dir:

--gcs-source-staging-dir=GCS_SOURCE_STAGING_DIR

A directory in Google Cloud Storage to copy the source used for staging the build. If the specified bucket does not exist, Cloud Build will create one. If you don't set this field, gs://[PROJECT_ID]_cloudbuild/source is used.

Since this flag is not set, the bucket is created as a multi-region bucket in the US. The same behavior applies to the --gcs-log-dir flag.
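Since the default staging bucket name is fixed as gs://[PROJECT_ID]_cloudbuild, another option (confirmed in the comments below) is to pre-create that bucket yourself in the region you want before the first build, and Cloud Build will use it without problems. A minimal sketch, assuming gsutil is installed and "my-project" is a placeholder project ID:

```shell
#!/bin/sh
# Sketch: pre-create the default Cloud Build staging bucket in a chosen
# region so builds reuse it instead of creating a multi-region US one.
# "my-project" is a placeholder; replace with your real project ID.
project_id="my-project"
staging_bucket="gs://${project_id}_cloudbuild"

# -p sets the owning project, -l sets the bucket location (region)
gsutil mb -p "$project_id" -l us-east1 "$staging_bucket"
```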

To place these buckets in the region, dual-region, or multi-region you want, use a cloudbuild.yaml together with the --gcs-source-staging-dir flag. You can do the following:

  1. Create a bucket in the region, dual-region, or multi-region you want. For example, I created a bucket called "example-bucket" in australia-southeast1.
  2. Create a cloudbuild.yaml file. This is necessary to store the build artifacts in the bucket you want, as mentioned here. An example is as follows:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - 'run'
  - 'deploy'
  - 'cloudrunservice'
  - '--image'
  - 'gcr.io/PROJECT_ID/IMAGE'
  - '--region'
  - 'REGION_TO_DEPLOY'
  - '--platform'
  - 'managed'
  - '--allow-unauthenticated'
artifacts:
  objects:
    location: 'gs://example-bucket'
    paths: ['*']
  3. Finally, run the following command:
gcloud builds submit --gcs-source-staging-dir="gs://example-bucket/cloudbuild-custom" --config cloudbuild.yaml

The steps mentioned before can be adapted to your script. Please give it a try :) and you will see that even if the Cloud Run service is deployed in Asia, Europe, or the US, the bucket specified before can be in another location.
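Applied to the script in the question, only the submit step changes; a sketch, assuming "example-bucket" was created beforehand in your preferred region and the cloudbuild.yaml above sits next to the script:

```shell
#!/bin/sh
# Sketch: submit the build with a custom staging location so the source
# tarball lands in a bucket you control. "example-bucket" is a
# placeholder for a bucket you created in your preferred region.
project_id=$1
staging_dir="gs://example-bucket/cloudbuild-custom"

gcloud builds submit \
    --project "$project_id" \
    --gcs-source-staging-dir "$staging_dir" \
    --config cloudbuild.yaml
```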

Puteri
  • OK, perfect. I scanned through that man page and totally missed the best stuff. It sounds like creation of a yaml file is required here? What if I manually created the buckets with the expected names ahead of time? It looks like two buckets were created in my case - "staging" and "artifacts". – Doug Stevenson Apr 02 '20 at 02:59
  • Fun fact: I deleted the two buckets and recreated the _cloudbuild suffixed one OK, but couldn't recreate the artifacts one because it required domain name verification. But a new deploy made the bucket again (US multi-region). Is it possible to specify the target bucket without requiring a yaml file? – Doug Stevenson Apr 02 '20 at 03:31
  • Yes, actually it is mentioned in the [Bucket and object naming guidelines](https://cloud.google.com/storage/docs/naming#requirements) that names containing dots require verification as they are treated as part of the `appspot.com` domain. The buckets created as multi-region seem to be part of GCP's logic to create buckets nearest to the resource's location. For example, if I deploy an image that is in `eu.gcr.io`, a bucket will be created as multi-region in `eu`. The behavior is similar if the image is in `gcr.io` or `asia.gcr.io`, with their respective multi-region buckets. – Puteri Apr 02 '20 at 03:55
  • So, in conclusion, for now it is not possible to set the `artifacts` bucket without a `yaml` file; this could be a [feature request](https://issuetracker.google.com/issues/new?component=190802&template=0) for Cloud Build to add an option to set the artifacts location using a flag. A flag is only available for the `staging` bucket. – Puteri Apr 02 '20 at 04:14
  • Forgot to mention that the [PROJECT_ID]_cloudbuild bucket can be created in any location and Cloud Build will use the bucket without problems. – Puteri Apr 02 '20 at 04:14

Looks like this is only possible by doing what you're mentioning in the comments:

  1. Create a storage bucket in us-east1 as the source bucket ($SOURCE_BUCKET);
  2. Create an Artifact Registry repo in us-east1;
  3. Create the following cloudbuild.yaml:
    steps:
    - name: 'gcr.io/cloud-builders/docker'
      args: ['build', '-t', 'us-east1-docker.pkg.dev/$PROJECT_ID/my-repo/my-image', '.']
    images:
    - 'us-east1-docker.pkg.dev/$PROJECT_ID/my-repo/my-image'
    
  4. Deploy with:
    $ gcloud builds submit --config cloudbuild.yaml --gcs-source-staging-dir=gs://$SOURCE_BUCKET/source
    

More details here: https://cloud.google.com/artifact-registry/docs/configure-cloud-build
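For reference, the setup in steps 1 and 2 can be sketched from the CLI; "my-repo", "my-project", and the bucket name are placeholders:

```shell
#!/bin/sh
# Sketch of steps 1-2: a regional source bucket plus an Artifact
# Registry Docker repo, both in us-east1. Names are placeholders.
project_id="my-project"
source_bucket="gs://${project_id}-source"

# Create the source staging bucket in us-east1
gsutil mb -p "$project_id" -l us-east1 "$source_bucket"

# Create a Docker-format Artifact Registry repository in us-east1
gcloud artifacts repositories create my-repo \
    --project "$project_id" \
    --repository-format docker \
    --location us-east1
```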

I think it should at least be possible to specify the Artifact Registry repo with the --tag option and have it be automatically created, but it currently rejects any domain that isn't gcr.io outright.

Dustin Ingram
  • And I can't create the default artifact registry bucket because it contains a dot and requires domain name verification. Was hoping to avoid yaml altogether and just take defaults. – Doug Stevenson Apr 02 '20 at 03:40
  • When I tried this, it did not create a second bucket at all since the `cloudbuild.yaml` is not configured to upload the build artifacts, only the resulting image (which doesn't go in a GCS bucket apparently). – Dustin Ingram Apr 02 '20 at 05:29