Using Container-Optimized OS (COS) on Google Compute Engine, what's the best way to access the credentials of the default service account for the VM's project from within a Docker container?

$ gcloud compute instances create test-instance \
  --image=cos-stable --image-project=cos-cloud

$ ssh (ip of the above)
# gcloud ...
Command not found

# docker run -ti google/cloud-sdk:alpine /bin/sh
# gcloud auth activate-service-account
... --key-file: Must be specified.

If the credentials were on the VM, Docker could simply mount them. Ordinarily they would live in ~/.config/gcloud/, which could be bind-mounted into the container (a sketch follows). It is not apparent whether, or where, such credentials exist on Container-Optimized OS, particularly since it ships without gcloud.
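
On a workstation where gcloud has already been authorized, that mount would look roughly like this (note that the container-side path has to be spelled out, since ~ is not expanded for the container):

docker run --rm -it \
  -v "$HOME/.config/gcloud:/root/.config/gcloud" \
  google/cloud-sdk:alpine gcloud auth list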

Failing the credentials being on the VM and mountable, options would seem to include:

  1. Put the credentials in the container metadata / environment variable;
  2. Create a .json credentials file for the service account, then
    1. upload it to the VM, then mount it (a sketch of this appears after the list); or
    2. add the .json to the container;
  3. Run a Docker container (e.g. cloud-sdk-docker) that obtains the credentials and shares them with the host via e.g. a shared mount partition. Ideally this would use gcloud auth activate-service-account.
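
A minimal sketch of option 2.1, assuming a .json key file for the service account named sa-key.json has already been created and copied to the VM (the file name and paths are placeholders):

docker run --rm -it \
  -v /home/user/sa-key.json:/tmp/sa-key.json \
  google/cloud-sdk:alpine \
  sh -c 'gcloud auth activate-service-account --key-file=/tmp/sa-key.json && gcloud auth list'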

Is there a canonical or best-practices way to provide a Docker container with the service account credentials of the VM's project?

Google Cloud already has a security model that expresses exactly what is desired: a VM inside a project should have the access granted to its service account. To avoid complexity and the possibility of misconfiguration or mishap, the correct solution would use this existing model, i.e. it would not involve creating, downloading, distributing, and maintaining credential files.
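
For reference, the mechanism behind that model is the metadata server: any process on the VM, including one inside a container, can request a token for the VM's attached service account without any key file, as long as the container can reach the metadata server (it evidently can in the reproduction below). A minimal check:

# run inside any container on the VM; no credentials file involved
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
# expected: JSON with access_token, expires_in and token_type, limited to
# whatever scopes the VM instance was created with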

It feels like this would be a routine problem that would need to be solved with COS, Docker, and Kubernetes, so I assume I've missed something straightforward — but the solution was not apparent to me from the docs.

EDIT — Noting the set-service-account API — this question could be reduced to "How do you use the set-service-account API with Container OS?" If it's not possible (because Container OS lacks gcloud and gsutil), I think this should be noted so VM users can plan accordingly.

EDIT: For the next folks who come across this:

To replicate the issue, I used:

[local] $ ssh test-instance-ip
[test-instance] $ docker run -it gcr.io/google-appengine/python /bin/bash
[container] $ pip install --upgrade google-cloud-datastore
[container] $ python

>>> from google.cloud import datastore
>>> datastore_client = datastore.Client()
>>> q = datastore.query.Query(datastore_client, kind='MODEL-KIND')
>>> list(q.fetch())
[... results]

The issue was indeed the access scopes set for the VM instance; in particular, the Datastore scope was disabled for the default account (under the heading Cloud API access scopes for the VM). One can find the scopes, and the necessary datastore line, as follows:

> gcloud compute instances describe test-instance
...
serviceAccounts:
- email: *****-compute@developer.gserviceaccount.com
  scopes:
  - https://www.googleapis.com/auth/datastore
  ...
...

Note that the service account itself had permission to access Datastore (so Datastore could generally be reached with a JSON credential key for that service account). The service account's effective permissions were limited by the scopes of the VM.
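
The effective scopes can also be read from inside the container itself via the metadata server, which is a quick way to spot this situation (a sketch; the same reachability caveat as above applies):

curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes"
# prints one scope URL per line; https://www.googleapis.com/auth/datastore
# (or the broader cloud-platform scope) needs to appear for the Datastore
# client above to work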

Brian M. Hunt

3 Answers


The usual way to authenticate would be the one described in the Google Cloud SDK Docker README.

From within the COS instance run this once:

docker run -ti --name gcloud-config google/cloud-sdk gcloud auth login

This will store your credentials in the gcloud-config container volume.

This volume should only be mounted into containers that you want to have access to your credentials, which probably won't be anything other than cloud-sdk:

docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk:alpine gcloud compute instances create test-docker --project [PROJECT]  


Created [https://www.googleapis.com/compute/v1/projects/project/zones/us-east1-b/instances/test-docker].
NAME         ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
test-docker  us-east1-b  n1-standard-1               10.142.0.8   X.X.X.X  RUNNING

Service accounts are usually meant to use their own set of credentials, which they have to get from somewhere, be it a key file, an environment variable, or a token:

gcloud auth activate-service-account

If you want gcloud (and other tools in the Cloud SDK) to use service account credentials to make requests, use this command to import these credentials from a file that contains a private authorization key, and activate them for use in gcloud. This command serves the same function as gcloud auth login but for using a service account rather than your Google user credentials.

Also, the best practice is to create different service accounts for different instances, rather than getting the key of the default service account and using it (a sketch of this process follows the quoted steps below):

In general, Google recommends that each instance that needs to call a Google API should run as a service account with the minimum permissions necessary for that instance to do its job. In practice, this means you should configure service accounts for your instances with the following process:

1 - Create a new service account rather than using the Compute Engine default service account.
2 - Grant IAM roles to that service account for only the resources that it needs.
3 - Configure the instance to run as that service account.
4 - Grant the instance the https://www.googleapis.com/auth/cloud-platform scope.
5 - Avoid granting more access than necessary and regularly check your service account permissions to make sure they are up-to-date.
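
A sketch of those steps with gcloud, using placeholder names (my-app-sa, my-project, my-cos-instance):

# steps 1-2: a dedicated service account with only the roles it needs
gcloud iam service-accounts create my-app-sa --display-name "my-app"
gcloud projects add-iam-policy-binding my-project \
  --member serviceAccount:my-app-sa@my-project.iam.gserviceaccount.com \
  --role roles/datastore.user

# steps 3-4: run the instance as that account, with the cloud-platform scope
gcloud compute instances create my-cos-instance \
  --image-family cos-stable --image-project cos-cloud \
  --service-account my-app-sa@my-project.iam.gserviceaccount.com \
  --scopes https://www.googleapis.com/auth/cloud-platform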

UPDATE

I'm not sure set-service-account does what you need/want. With it you can change the service account that an instance uses (the instance must be stopped first, though, so you can't use it to change the service account from within the instance being changed). However, you can use it normally for other instances, see:

jordim@cos ~ $ docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk:alpine gcloud compute instances set-service-account instance-1 --service-account xx-compute@developer.gserviceaccount.com
Did you mean zone [us-east1-b] for instance: [instance-1] (Y/n)?  

Updated [https://www.googleapis.com/compute/v1/projects/XX/zones/us-east1-b/instances/instance-1].
Tux
  • Thanks Jordi. Here are a couple points. 1. `gcloud auth login` will grant personal credentials, which must be avoided. 2. Google provides the [set-service-account API](https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances#changeserviceaccountandscopes) that is clearly meant to determine the credentials — but what's the point of that API with Container OS if those credentials are not available? I'll update the question to note this. Grateful for your assistance. – Brian M. Hunt May 14 '18 at 16:26
  • Hi Jordi, thanks for the update. So what I want is for Docker containers to inherit the permissions/service account of their instance. – Brian M. Hunt May 16 '18 at 16:05
  • But don't they already? Check that your ContainerOS instance has the necessary scopes and then run a google/cloud-sdk:alpine container interactively ( docker run -it google/cloud-sdk:alpine /bin/bash ). Once inside, create an instance: " gcloud compute instances create test ". If the service account on your base instance has the right permissions and scopes to do that, you will see that the container does too. https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform#why_use_service_accounts – Tux May 17 '18 at 07:39
  • When running `gcloud` I get "Insufficient Permission"; when using the datastore API I get `google.api_core.exceptions.Forbidden: 403 Request had insufficient authentication scopes.` . The service account in the metadata has "edit access to all resources". – Brian M. Hunt May 17 '18 at 12:18
  • This is related to the instance scopes, not the service account permissions. Set the instance scope to "allow full access to all cloud APIs" and try again. Here is how: https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances#changeserviceaccountandscopes – Tux May 17 '18 at 12:48
  • @TarunLalwani I've given the reward based on the helpful effort here (and the timer runs out in an hour), but haven't had a chance to test this yet — am going to do that today. Will mark this as the correct answer if it works, otherwise I'll comment / update the question. Will be back soon to test. – Brian M. Hunt May 20 '18 at 11:23
  • @TarunLalwani Jordi's advice indeed fixed the issue, namely it was the scope of the service account access that prevented datastore access. I added a few more details in the question. – Brian M. Hunt May 20 '18 at 17:04

I think this issue is no longer entirely valid today, so I would like to share my two cents.

In the case of Container-Optimized OS, if the VM is running with the default service account, the same account gets auto-configured inside the cloud-sdk container.

user@instance-1 ~ $ docker run -it gcr.io/google.com/cloudsdktool/cloud-sdk:alpine /bin/bash
bash-5.1# gcloud config list
[component_manager]
disable_update_check = true
[core]
account = *************-compute@developer.gserviceaccount.com
disable_usage_reporting = true
project = my-project-id
[metrics]
environment = github_docker_image

Your active configuration is: [default]
bash-5.1# gcloud compute instances list
NAME        ZONE           MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP  STATUS
instance-1  us-central1-a  e2-medium                  10.128.0.3   34.**.**.***  RUNNING

Hence, one does not need to perform gcloud auth login and can directly execute gcloud commands, provided the default service account has the required permissions and the VM's access scopes explicitly enable the specific APIs.

However, this does not apply if the VM was created with the "No service account" option selected.

Manish Bansal

I think this is related to the Compute Engine configuration. DevopsTux already said:

This is related to the instance scopes, 
not the service account permissions. 
Set the instance scope to "allow full access to all cloud APIs" 
and try again. Here is how: cloud.google.com/compute/docs/access/… – DevopsTux May 17, 2018 at 12:48
  1. Stop the instance.
  2. Change the access scopes, as in the sketch below.

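Equivalently, from the command line (instance name and zone are placeholders; the instance must be stopped first, as noted above):

gcloud compute instances stop instance-1 --zone us-east1-b
gcloud compute instances set-service-account instance-1 --zone us-east1-b \
  --scopes https://www.googleapis.com/auth/cloud-platform
gcloud compute instances start instance-1 --zone us-east1-b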

Doosik Bae