
I have a Kubernetes cluster running on Azure. What is the way to access the cluster from the local kubectl command? I referred to here but on the Kubernetes master node there is no kubeconfig file. Also, kubectl config view results in

apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
Phagun Baya

9 Answers


Found a way to access a remote Kubernetes cluster without ssh'ing to one of the nodes in the cluster. You need to edit the ~/.kube/config file as below:

apiVersion: v1 
clusters:    
- cluster:
    server: http://<master-ip>:<port>
  name: test 
contexts:
- context:
    cluster: test
    user: test
  name: test

Then set the context by executing:

kubectl config use-context test

After this you should be able to interact with the cluster.

Note: To add a client certificate and key, see the kubeconfig documentation: http://kubernetes.io/docs/user-guide/kubeconfig-file/
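A minimal sketch of that, assuming the client certificate, key and (optionally) the cluster CA have already been copied next to your kubeconfig (the file names below are placeholders, not paths from the original answer):

kubectl config set-credentials test --client-certificate=./kubecfg.crt --client-key=./kubecfg.key
kubectl config set-cluster test --server=https://<master-ip>:<port> --certificate-authority=./ca.crt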

Alternatively, you can also try the following commands:

kubectl config set-cluster test-cluster --server=http://<master-ip>:<port> --api-version=v1
kubectl config set-context test-cluster --cluster=test-cluster
kubectl config use-context test-cluster
Afshin Mehrabani
Phagun Baya
  • How do I do this for GKE? Where do I get the key and certs from? Cloud shell uses gcloud for set up – harpratap Sep 06 '18 at 06:23
  • kubectl config view --minify --flatten - with this command I got the config information of the remote cluster. Then I copied this information to the ~/.kube/config file for my local user and it worked. – cah1r Apr 16 '19 at 06:42
  • @harpratap For GKE, `gcloud` CLI command handles the above setup of kubectl. You can set the context by `gcloud container clusters get-credentials `. See also the official document: "Configuring cluster access for kubectl" https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#generate_kubeconfig_entry – shuuji3 May 10 '20 at 07:21

You can also define the filepath of the kubeconfig by passing the --kubeconfig parameter.

For example, copy ~/.kube/config of the remote Kubernetes host to your local project's ~/myproject/.kube/config. In ~/myproject you can then list the pods of the remote Kubernetes server by running kubectl get pods --kubeconfig ./.kube/config.

Do notice that when copying the values from the remote Kubernetes server, a simple kubectl config view won't be sufficient, as it won't display the secrets of the config file. Instead, you have to do something like cat ~/.kube/config or use scp to get the full file contents.
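As a concrete sketch of that (the host and paths are placeholders, assuming ssh access to the machine that holds the kubeconfig):

scp <user>@<master-ip>:~/.kube/config ~/myproject/.kube/config
cd ~/myproject && kubectl get pods --kubeconfig ./.kube/config
# or point the whole shell session at it instead of repeating the flag
export KUBECONFIG=~/myproject/.kube/config
kubectl get pods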

See: https://kubernetes.io/docs/tasks/administer-cluster/share-configuration/

jhaavist
  • Doing this for GKE doesn't work because it relies on gcloud to get credentials – harpratap Sep 06 '18 at 06:21
  • See the above comment: https://stackoverflow.com/questions/36306904/configure-kubectl-command-to-access-remote-kubernetes-cluster-on-azure#comment109152230_36403838 – shuuji3 May 10 '20 at 07:22

For anyone landing on this question, the az CLI solves the problem.

az aks get-credentials --name MyManagedCluster --resource-group MyResourceGroup

This will merge the Azure context into your local .kube\config (in case you already have a connection set up; mine was C:\Users\[user]\.kube\config) and switch to the Azure Kubernetes Service context.

Reference
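To check which context the merge activated, plain kubectl is enough (nothing Azure-specific here):

kubectl config current-context
kubectl config get-contexts
kubectl get nodes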

pollirrata

1. Locate the .kube directory on your k8s machine. On Linux/Unix it will be at /root/.kube; on Windows it will be at C:/User/<username>/.kube.
2. Copy the config file from the .kube folder of the k8s cluster to the .kube folder of your local machine.
3. Copy the client certificate and key (client-certificate: /etc/cfc/conf/kubecfg.crt, client-key: /etc/cfc/conf/kubecfg.key) to the .kube folder of your local machine.
4. Edit the config file in the .kube folder of your local machine and update the paths of kubecfg.crt and kubecfg.key to their locations on your local machine:
   /etc/cfc/conf/kubecfg.crt --> C:\Users\<username>\.kube\kubecfg.crt
   /etc/cfc/conf/kubecfg.key --> C:\Users\<username>\.kube\kubecfg.key

Now you should be able to interact with the cluster. Run 'kubectl get pods' and you will see the pods on the k8s cluster.
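A sketch of those copy steps from a Linux/Unix master, assuming ssh access as root and the paths listed above (user, host and the local ~/.kube directory are assumptions; adjust to your setup):

scp root@<master-ip>:/root/.kube/config ~/.kube/config
scp root@<master-ip>:/etc/cfc/conf/kubecfg.crt ~/.kube/kubecfg.crt
scp root@<master-ip>:/etc/cfc/conf/kubecfg.key ~/.kube/kubecfg.key
# then edit ~/.kube/config so client-certificate and client-key point at the copied files
kubectl get pods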

AdHorger
Gajendra D Ambi
  • This is probably the simplest solution for a minikube setup – asyncwait Aug 31 '19 at 17:55
  • Used this as a starting point for connecting to a Microk8s cluster, where the location of the config and required certificates can be found in the docs: https://microk8s.io/docs/ports#heading--auth – ChThy Nov 29 '22 at 18:24

How did you set up your cluster? To access the cluster remotely you need a kubeconfig file (it looks like you don't have one) and the setup scripts generate a local kubeconfig file as part of the cluster deployment process (because otherwise the cluster you just deployed isn't usable). If someone else deployed the cluster, you should follow the instructions on the page you linked to to get a copy of the required client credentials to connect to the cluster.

Robert Bailey
  • I'm using the following link to bring up a cluster http://kubernetes.io/docs/getting-started-guides/coreos/azure/ but cannot find a pointer to the kubeconfig file. – Phagun Baya Mar 31 '16 at 12:15
  • Reading through the example, it looks like the walkthrough has you run the `kubectl` commands on the master node. Can you log into the master and try running the commands there? – Robert Bailey Mar 31 '16 at 16:49
  • Yes, I can access the cluster from the master node. But I need help accessing it remotely (without ssh'ing to the master node). – Phagun Baya Apr 01 '16 at 13:17
  • I've never personally tried the Azure guides, so I'm flying a bit blind here but you should first check to see if your apiserver is exposed to the internet. It should be listening on port 443 (with TLS) to all incoming connections. Can you reach that externally (even if it rejects your request)? If not, your first step will be to set that up. Next you need to see what kinds of authentication your apiserver accepts. Then you can follow the steps in the document that your question links to to manually generate a kubeconfig file. – Robert Bailey Apr 01 '16 at 16:33

The Azure setup only exposes the ssh ports externally. This can be found under ./output/kube_xxxxxxxxxx_ssh_conf. What I did was tunnel over ssh so the API is available on my machine, by adding an ssh port forward. Go into the above file and, under the "host *" section, add another line like the one below:

LocalForward 8080 127.0.0.1:8080

which maps my local machine's port 8080 (where kubectl looks for the default context) to the remote machine's port 8080, where the master listens for API calls. When you open ssh to kube-00 as the regular docs show, you can now make calls from your local kubectl without any extra configuration.
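An equivalent one-off tunnel without editing the generated ssh config file, assuming the same kube-00 host from the Azure guide (user and host names are placeholders):

ssh -L 8080:127.0.0.1:8080 <user>@kube-00
# in another terminal; kubectl's old default of http://localhost:8080 now reaches the remote apiserver
kubectl -s http://127.0.0.1:8080 get nodes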

aofry

I was trying to set up kubectl on a different client from the one I originally created the kops cluster from. Not sure if this would work on Azure, but it worked on an AWS-backed (kops) cluster:

kops / kubectl - how do i import state created on a another server?

Geremy

For clusters that are created manually using VMs of cloud providers, just get the kubeconfig from ~/.kube/config. However, for managed services like GKE you will have to rely on gcloud to generate the kubeconfig at runtime with the right token.

Generally, a service account can be created that will help in getting the right kubeconfig with a token generated for you. Something similar can also be found in Azure.
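A hedged sketch of that service-account route on a cluster you can already reach (the names are arbitrary; kubectl create token requires Kubernetes 1.24+, older clusters auto-create a token Secret instead):

kubectl -n kube-system create serviceaccount remote-admin
kubectl create clusterrolebinding remote-admin --clusterrole=cluster-admin --serviceaccount=kube-system:remote-admin
TOKEN=$(kubectl -n kube-system create token remote-admin)
kubectl config set-credentials remote-admin --token="$TOKEN"
kubectl config set-context remote --cluster=<your-cluster-name> --user=remote-admin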

Santosh

If you have Windows, check your %HOME% environment variable; it should point to your user directory. Then create the folder ".kube" in "C:/users/your_user" and within that folder create your "config" file as described by Phagun Baya.

echo %HOME%