29

When I try to test my kubectl configuration:

kubectl get svc 

I get this error:

error: the server doesn't have a resource type "svc"

And when I try this command:

kubectl get services 

I get this error:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

I am following this user guide to deploy a Kubernetes application on my Mac:

https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#eks-create-cluster

Admins-MacBook-Pro:~ Harshin$ kubectl version --short --client
Client Version: v1.10.3

8 Answers

40

Make a copy of the admin config file under ~/.kube to resolve this issue:

# create the kubeconfig directory and copy the admin config into it
sudo mkdir ~/.kube
sudo cp /etc/kubernetes/admin.conf ~/.kube/

# rename it to "config", the default filename kubectl looks for
cd ~/.kube
sudo mv admin.conf config

# restart the kubelet
sudo service kubelet restart
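
After copying, kubectl may still fail if the file is only readable by root. A minimal follow-up sketch, assuming a kubeadm-provisioned cluster (these are the ownership steps the kubeadm docs recommend):

# make the copied config readable by your own user, then verify the connection
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes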
22

You need to specify the kubeconfig file for kubectl, like this:

kubectl --kubeconfig .kube/config get nodes
  • It did work for me, but not sure why. Can you elaborate? – iamnicoj Nov 15 '20 at 01:31
  • @iamnicoj It's because you're specifying the config in which you saved the credentials for your target AKS. I experienced the issue on WSL2, which isn't a very native environment for the Azure CLI, which makes sense for this workaround. Adding `alias k="kubectl --kubeconfig .kube/config"` to your `.bashrc` or `.zshrc` would make it easier for you to navigate. – Mert Alnuaimi Mar 28 '21 at 10:51
  • I was also able to set this flag value with an environment variable, like this: export KUBECONFIG=".kube/config" – Valerie Parham-Thompson Jul 01 '21 at 19:06
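
For reference, a small sketch combining the suggestions from the comments above (paths assume the default kubeconfig location; adjust as needed):

# point kubectl at an explicit kubeconfig for the current shell session
export KUBECONFIG="$HOME/.kube/config"
kubectl get nodes

# or persist the shortcut as an alias (bash shown; use ~/.zshrc for zsh)
echo 'alias k="kubectl --kubeconfig $HOME/.kube/config"' >> ~/.bashrc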
20

I was facing the same issue while trying to access an AWS EKS cluster. Here's the command I had to run to resolve it:

aws eks update-kubeconfig --name <EKS_Cluster_Name> --region <Region_Name>
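
For example, with placeholder values (the cluster name and region below are hypothetical; update-kubeconfig writes the cluster's endpoint and credentials into your default kubeconfig):

aws eks update-kubeconfig --name my-cluster --region us-west-2
kubectl get svc   # should now list the default kubernetes service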
9

kubectl consumes an API exposed by a container service (GCP, ACS, AWS, etc.). When you receive that error, it could be that you haven't configured authentication for that container service. For example, on Google's container service you first log in:

gcloud auth login

Finally:

gcloud container clusters get-credentials [cluster-name] --zone [cluster-zone]

There will be an output like this one:

Fetching cluster endpoint and auth data.
kubeconfig entry generated for website.

The last line is what we were looking for.
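
As a quick check, a hedged sketch to confirm the generated entry is active (GKE context names follow the pattern gke_<project>_<zone>_<cluster>):

kubectl config current-context
kubectl get nodes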

  • That's the most proper answer – Peter Jun 07 '20 at 17:07
  • One small issue here: there's an extra "cluster-zone" in the line above. I think it should be: `gcloud container clusters get-credentials [cluster-name] --zone [cluster-zone]`, where `[cluster-zone]` is something like `us-west1-a` – schimmy Feb 14 '23 at 21:06
2

The problem is that the connection defaults to localhost:8080 when kubectl can't find a kubeconfig. Edit your kubeconfig to specify the server you want it to connect to. Another possible problem is that the KUBECONFIG environment variable isn't set to the correct path. Good luck.
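
To see where kubectl is actually pointing, a small diagnostic sketch (standard kubectl commands; no assumptions beyond a default setup):

# print the API server URL of the currently selected context
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

# check whether KUBECONFIG overrides the default ~/.kube/config
echo "$KUBECONFIG"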

0

It seems like you have not set up the cluster. Run the command below on the master node:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16
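
On success, kubeadm init itself prints the follow-up steps for configuring kubectl as a regular user; they look like this:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config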
0

I got this error on AWS.

You might not have deleted your previous cluster properly. Delete it cleanly with:

eksctl delete cluster --name <clusterName> --wait
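
Afterwards you can confirm the deletion and start over (the cluster name and region below are placeholders):

eksctl get cluster --region us-west-2        # should no longer list the old cluster
eksctl create cluster --name my-cluster --region us-west-2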
-2

You may have used sudo in the wrong place, leaving files in your home directory owned by root. Chown your home directory back to your logged-on user:

# run from your home directory; replace "loggedonuser" with your username
sudo chown -R loggedonuser .
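
If you'd rather not chown your entire home directory, a narrower sketch, assuming only the kubeconfig files were affected:

# reclaim just the kubectl config directory for the current user
sudo chown -R $(id -u):$(id -g) ~/.kube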