
I am using chamber, a tool for managing secrets.

Basically, it populates the environment with the secrets from the specified services and executes the given command.

E.g.: chamber exec <service> -- ./script.sh

This will run script.sh with the env vars defined in chamber for that service.
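To make that concrete (the service and key names here are made up), storing a secret and running a script against it looks like:

chamber write my-service db_password s3cr3t
chamber exec my-service -- ./script.sh

chamber upper-cases key names on export, so script.sh sees the secret as DB_PASSWORD.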

I want to do the same with a Google Cloud service account, in order to use the Cloud SQL Proxy.

The problem is that the GOOGLE_APPLICATION_CREDENTIALS env var expects a path to a JSON file, not the actual value.

I can easily store the JSON in chamber, but I can't use it as an env var unless I write the value out to a JSON file and point the env var at its path.
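A rough sketch of that workaround (the chamber key name GCP_SA_KEY is made up, and this is untested):

#!/bin/sh
# wrapper.sh - write the key from the environment to a temp file,
# point GOOGLE_APPLICATION_CREDENTIALS at it, and clean up on exit
KEY_FILE="$(mktemp)"   # mktemp -p /dev/shm would keep it in RAM on Linux
trap 'rm -f "$KEY_FILE"' EXIT
printf '%s' "$GCP_SA_KEY" > "$KEY_FILE"
export GOOGLE_APPLICATION_CREDENTIALS="$KEY_FILE"
./cloud_sql_proxy -instances=INSTANCE_NAME

It would run as: chamber exec my-service -- ./wrapper.sh. But the key still gets written to the filesystem, even if only briefly.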

For security reasons, I don't want to store the JSON file inside my production instance.

I know that I can use gcloud auth login to authenticate, but I don't want to install unnecessary tooling.

I could also use a token:

./cloud_sql_proxy -instances=INSTANCE_NAME -token=TOKEN_VALUE

The problem is that the token expires really quickly.
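For reference, the usual way to mint such a token is via gcloud (which, again, I'd rather not install), and the resulting access token is only valid for about an hour:

TOKEN="$(gcloud auth print-access-token)"
./cloud_sql_proxy -instances=INSTANCE_NAME -token="$TOKEN"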

If I could convert the service account JSON file into a base64 string and use it as a token it would be perfect.
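The encoding round-trip itself is trivial (key name made up; base64 -w0 is the GNU coreutils flag for no line wrapping):

# store the key base64-encoded so it survives as a single env var value
chamber write my-service gcp_sa_key_b64 "$(base64 -w0 service-account.json)"
# decode it again at runtime
printf '%s' "$GCP_SA_KEY_B64" | base64 -d > /tmp/key.json

But as far as I can tell, the proxy only accepts a token or a file path, so this alone still ends with a key file on disk.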

Long story short:

I would like to store the Google Cloud auth secrets in my secrets management tool and use them with cloud_sql_proxy.

My code is running on AWS EC2.

The naive approach I can think of would be to add the JSON file, authenticate, and then delete the file... I am using Packer to generate the instance image.

Any idea how can I achieve this?

soltex
  • Read the documentation for ADC (Application Default Credentials). When your code is running inside Google Cloud, you do not need a service account JSON key file. Credentials are provided by the metadata service. – John Hanley Mar 23 '21 at 15:40
  • @JohnHanley Yes, you are right, but my code is running on AWS ec2. – soltex Mar 23 '21 at 17:01
  • Look into Google Workload Identity Federation. This allows you to use an AWS user or role to impersonate a Google service account. https://cloud.google.com/iam/docs/access-resources-aws – John Hanley Mar 23 '21 at 18:17
  • That sounds great! It might be the solution I am looking for. It will take me a while to digest all that information. I will come back to you as soon as I've tested this approach. Thank you for your help! :) – soltex Mar 23 '21 at 18:35
  • I have deployed this several times with Azure -> Google. I also wrote Terraform to deploy everything for both the Azure and Google sides. It appears overly complex at the beginning but once you understand the basics, it is easy to deploy and use. – John Hanley Mar 23 '21 at 18:44
  • Yeah, it looks a little bit complex. I am also using Terraform. Should I go straight to the Terraform configuration, or is it a good idea to do it manually so I can understand it better? – soltex Mar 23 '21 at 19:36
  • The answer depends. I prefer to understand the low-level details so that I have a better chance of solving problems later and to be able to implement new features as Google releases them. However, I found using Terraform much easier to deploy. Create a new question asking for Terraform code for the Google side and I will post my Terraform code as the answer. Sometimes it is easier to start with something that works and then learn the details. – John Hanley Mar 23 '21 at 19:46
  • That makes sense. I will do it manually then, I like to understand how everything works. I really appreciate your help, and I agree with you, it is easier to start with something, but if I write a post asking for code without showing anything, everyone will downvote it for sure! Just point me in the right direction. Is this a good start? https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/iam_workload_identity_pool – soltex Mar 23 '21 at 19:58
  • That link is good for the Terraform side. Understanding the arguments is the detail that you will learn going through the manual process. This Terraform resource is also required. https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/iam_workload_identity_pool_provider – John Hanley Mar 23 '21 at 20:15
  • Perfect! It is time to dive into the documentation and learn! I will keep you updated. Once again, thank you for your time, really appreciated it! :) – soltex Mar 23 '21 at 20:24
  • Tip: This does require an organization. I recommend creating a new project to test in. That way you can delete the project to start over again. – John Hanley Mar 23 '21 at 20:29
  • Yeah, I have a few projects to test this. Actually, I had a similar problem to what I described in this question: I couldn't access the Google SQL database through my Google Compute Engine instances. I ended up adding the IP addresses of the instances to the database networking configuration using Terraform. I am not sure if I can somehow do the same with AWS EC2 instances. Anyway, I found out, with your help, that there is an alternative when the instances are on Google servers, using the internal API/metadata service and cloud_sql_proxy. – soltex Mar 23 '21 at 20:38
  • One last tip. Enable the IAM, Resource Manager, Service Account Credentials, and Security Token Service (STS) APIs for the project. – John Hanley Mar 23 '21 at 20:48
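Putting the comment thread together, a minimal sketch of the Workload Identity Federation setup looks something like this (requires a recent gcloud; every name and ID below is a placeholder):

# create a workload identity pool and an AWS provider in it
gcloud iam workload-identity-pools create my-pool --location=global

gcloud iam workload-identity-pools providers create-aws my-aws-provider \
    --location=global --workload-identity-pool=my-pool --account-id=AWS_ACCOUNT_ID

# let identities from the pool impersonate the service account
gcloud iam service-accounts add-iam-policy-binding SA_NAME@PROJECT_ID.iam.gserviceaccount.com \
    --role=roles/iam.workloadIdentityUser \
    --member="principalSet://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/my-pool/*"

# generate a credential config file for the client libraries / proxy
gcloud iam workload-identity-pools create-cred-config \
    projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/my-pool/providers/my-aws-provider \
    --service-account=SA_NAME@PROJECT_ID.iam.gserviceaccount.com \
    --aws --output-file=credentials.json

The generated credentials.json contains configuration only, no private key, so pointing GOOGLE_APPLICATION_CREDENTIALS at it on the EC2 instance is far less sensitive than shipping an actual service-account key.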

0 Answers