4

Given a scenario where I have two Kubernetes clusters, one hosted on AWS EKS and the other on another cloud provider, I would like to manage the EKS cluster from the other cloud provider. What's the easiest way to authenticate such that I can do this?

Would it be reasonable to generate a kubeconfig that embeds the result of aws get-token (or something like that) and copy it to the cluster on the other cloud provider? Or are these tokens not persistent?

Any help or guidance would be appreciated!

Jay K.

3 Answers

5

I believe the most correct way is the one described in Create a kubeconfig for Amazon EKS.

Yes, you create a kubeconfig with aws eks get-token and then add the newly created config to the KUBECONFIG environment variable, e.g.

export KUBECONFIG=$KUBECONFIG:~/.kube/config-aws

or you can add it to your .bash_profile for convenience:

echo 'export KUBECONFIG=$KUBECONFIG:~/.kube/config-aws' >> ~/.bash_profile

For detailed steps, please refer to the URL provided above.
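To illustrate why token persistence is not a problem: the config described in that guide does not store a token at all. Instead it registers an exec credential plugin, so kubectl runs aws eks get-token for every request and the short-lived token (roughly 15 minutes) is refreshed automatically. Below is a minimal sketch of such a file, written to the ~/.kube/config-aws path used above; the cluster name, context name, region, endpoint, and CA data are placeholders, and in practice you would let the AWS CLI generate the real file for you.

cat <<'EOF' > ~/.kube/config-aws
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://<cluster-endpoint>.eks.amazonaws.com
    certificate-authority-data: <base64-encoded-ca-data>
  name: my-eks-cluster
contexts:
- context:
    cluster: my-eks-cluster
    user: my-eks-user
  name: my-eks-context
current-context: my-eks-context
users:
- name: my-eks-user
  user:
    exec:
      # kubectl runs this command for every request, so no long-lived
      # token ever needs to be stored in the file
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
      - --region
      - <region-code>
      - eks
      - get-token
      - --cluster-name
      - my-eks-cluster
EOF

Note that this assumes the AWS CLI is installed on the machine at the other cloud provider and is configured with IAM credentials that have access to the EKS cluster.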

Vit
0

I had the same use case, where I needed to work with multiple cloud providers.

So I created kubech to deal with that situation and manage multiple clusters simultaneously.
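Whatever helper you use, the underlying mechanism is plain kubectl contexts. As a rough sketch (the context names and the ~/.kube/config-aws path are placeholders, assuming the EKS kubeconfig was generated as in the other answers), switching between clusters looks like this:

# Make both config files visible to kubectl and list the available contexts
export KUBECONFIG=~/.kube/config:~/.kube/config-aws
kubectl config get-contexts

# Switch the current context to the EKS cluster, then back again
kubectl config use-context <eks-context-name>
kubectl get nodes
kubectl config use-context <other-cloud-context-name>

# Or target a specific cluster per command without switching contexts
kubectl --context <eks-context-name> get pods --all-namespaces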

Ahmed AbouZaid
0

Assuming that you have a Linux platform on the second cloud provider, you can use the following command to generate the kubeconfig file:

aws eks update-kubeconfig --region <region-code> --name <cluster-name>

You can write to a different file using the --kubeconfig flag.
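For example (the region, cluster name, and file path below are placeholders), you could write the EKS credentials to a dedicated file and then point kubectl at it:

# Write the generated config to its own file instead of ~/.kube/config
aws eks update-kubeconfig --region <region-code> --name <cluster-name> --kubeconfig ~/.kube/config-aws

# Verify connectivity from the second cloud provider
kubectl --kubeconfig ~/.kube/config-aws get nodes

This assumes the AWS CLI on that machine is configured with IAM credentials that are allowed to access the cluster.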

Ref: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html

Haribk