We have a system designed to manage a large number of Kubernetes clusters in external customer accounts simultaneously. It currently works by storing each cluster's kubeconfig in a database, querying it at runtime, and passing the bytes into the Go kube-client constructor like so:
clientcmd.NewClientConfigFromBytes([]byte(kubeConfigFromDB))
For clusters using basic auth, this "just works".
For EKS clusters, this works as long as the aws-iam-authenticator is installed on the machine running the Go code (so the kube-client can call out to it for authentication), and the correct API_AWS_ACCESS_KEY_ID and API_AWS_SECRET_ACCESS_KEY are set within the kubeconfig's user.exec.env key.
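Concretely, the user entry we store for EKS looks roughly like this (the user name, cluster name, and key values are placeholders; only the env var names come from our setup):

```yaml
users:
- name: eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
      - token
      - -i
      - my-eks-cluster
      env:
      - name: API_AWS_ACCESS_KEY_ID
        value: "<access-key-id>"
      - name: API_AWS_SECRET_ACCESS_KEY
        value: "<secret-access-key>"
```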
For GKE clusters, it's not clear what the best-practice way of achieving this is, and I have not been able to get it to work yet, despite trying a handful of different approaches detailed below. The standard practice for generating a kubeconfig for a GKE cluster is very similar to EKS (detailed here: https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl?authuser=1#generate_kubeconfig_entry), and uses gcloud config config-helper to generate the authentication credentials.
One idea is to use the GOOGLE_APPLICATION_CREDENTIALS environment variable. The problem is that it is process-global, so our system cannot simultaneously talk to many remote GKE clusters, each of which needs a unique set of Google credentials to authenticate.
My second idea was to use the --impersonate-service-account flag of gcloud config config-helper; however, it crashes when I run it, with the following error:
$ gcloud config config-helper --format=json --impersonate-service-account=acct-with-gke-access@myorg.iam.gserviceaccount.com --project myproject
WARNING: This command is using service account impersonation. All API calls will be executed as [acct-with-gke-access@myorg.iam.gserviceaccount.com].
ERROR: gcloud crashed (AttributeError): 'unicode' object has no attribute 'utcnow'
My final idea is quite complicated. I would get the google-credentials-JSON and put it in the kubeconfig like so:
user:
  auth-provider:
    config:
      credentials: "<google-credentials-JSON>"
    name: my-custom-forked-gcp
And I would create my own copy of https://github.com/kubernetes/client-go/blob/master/plugin/pkg/client/auth/gcp/gcp.go#L156, replacing line 156,
ts, err := google.DefaultTokenSource(context.Background(), scopes...)
with
ts, err := tokenSourceFromJSON(context.Background(), gcpConfig["credentials"], scopes...)
where tokenSourceFromJSON is a new method that I add, which looks like this:
func tokenSourceFromJSON(ctx context.Context, jsonData string, scopes ...string) (oauth2.TokenSource, error) {
    creds, err := google.CredentialsFromJSON(ctx, []byte(jsonData), scopes...)
    if err != nil {
        return nil, err
    }
    return creds.TokenSource, nil
}
This last idea will probably work (hopefully! I'm working on it now), but it seems like a very complicated solution to a simple problem: providing the google-credentials-JSON at runtime to the Go Kubernetes client so it authenticates with those credentials. Is there an easier way?