
I have created a Kubernetes cluster on AWS EC2 using kubeadm. All the nodes are connected, and my deployment and service are working; when I expose the deployment I can even access it from outside the cluster. But when I try to access the Kubernetes API from outside (or locally), I get the error

"User "system:anonymous" cannot get at the cluster scope."
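(For context: this error means the request reached the API server without any credentials, so it was authenticated as the built-in anonymous user, which has no RBAC permissions. A hedged sketch of making an authenticated request instead, assuming admin.conf has been copied from the master and using the cluster IP from the question; file names are placeholders:)

```shell
# Sketch, not a definitive recipe: the kubeconfig stores the client cert and
# key base64-encoded, so decode them and hand them to curl.
awk '/client-certificate-data/{print $2}' admin.conf | base64 -d > client.crt
awk '/client-key-data/{print $2}'         admin.conf | base64 -d > client.key
# -k skips CA verification for brevity; in practice decode
# certificate-authority-data the same way and pass it via --cacert.
curl -k --cert client.crt --key client.key https://172.31.25.217:6443/api
```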

My cluster-info shows this:

Kubernetes master is running at https://172.31.25.217:6443 
KubeDNS is running at https://172.31.25.217:6443/api/v1/proxy/namespaces/kube-system/services/kube-dns

172.31.25.217 is the master node's local (private) IP.

I am using the latest version of kubectl and kubeadm:

kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
ubuntu@ip-172-31-25-217:/etc/kubernetes/manifests$ kubeadm version
kubeadm version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

Even if I run kubectl proxy and try to access the dashboard from outside the cluster at http://MASTER_IP:8001/ui, it fails with connection refused.

What trick am I missing? Can anyone help me?

Kubectl config view:

kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://172.31.17.145:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
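(The config above shows that kubectl authenticates with an embedded client certificate, which is why kubectl works on the master while a bare browser request carries no credentials and is treated as system:anonymous. A minimal sketch, using a Python dict that mirrors the structure above with placeholder values, of where the server URL and credentials live in a kubeconfig:)

```python
import base64

# Dummy kubeconfig mirroring the structure shown above; the cert/key values
# are placeholders, not real PEM data.
kubeconfig = {
    "apiVersion": "v1",
    "clusters": [{"cluster": {
        "certificate-authority-data": base64.b64encode(b"CA-PEM").decode(),
        "server": "https://172.31.17.145:6443"},
        "name": "kubernetes"}],
    "users": [{"name": "kubernetes-admin",
               "user": {"client-certificate-data": base64.b64encode(b"CERT-PEM").decode(),
                        "client-key-data": base64.b64encode(b"KEY-PEM").decode()}}],
}

server = kubeconfig["clusters"][0]["cluster"]["server"]
user = kubeconfig["users"][0]["user"]
# Requests signed with this client cert are authenticated as kubernetes-admin,
# not as system:anonymous.
has_client_cert = "client-certificate-data" in user and "client-key-data" in user
cert_pem = base64.b64decode(user["client-certificate-data"])

print(server)           # https://172.31.17.145:6443
print(has_client_cert)  # True
```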


  • from your master server copy admin.conf to your desktop. and run 'kubectl --kubeconfig=./admin.conf proxy -p 80 ' then try to access http://localhost/ui/ – sfgroups Jun 13 '17 at 13:25
  • I tried that, still not working! same issue. The problem is I can access the dashboard UI locally but not from outside the cluster – Anshul Jindal Jun 13 '17 at 13:27
  • My main problem is the First issue because If I start the heapster and grafana it works on the same 6443 port and gives me the same "User "system:anonymous" cannot get at the cluster scope." error! – Anshul Jindal Jun 13 '17 at 13:32
  • looks like it may be related to firewall issue. did you open the firewall port for kubeapi process? – sfgroups Jun 13 '17 at 13:35
  • can you tell me how to do that ? The ec2 instance has all the ports enabled to be accessed from anywhere. – Anshul Jindal Jun 13 '17 at 13:38
  • usually api server run on port 8080. I have open port in aws. this may help you. https://stackoverflow.com/questions/17161345/how-to-open-a-web-server-port-on-ec2-instance – sfgroups Jun 13 '17 at 14:22
  • I have opened all the ports in EC2 already but still the same issue not able to access! – Anshul Jindal Jun 13 '17 at 14:25
  • I have added kubectl config view in the post! – Anshul Jindal Jun 13 '17 at 14:28
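(On the firewall point raised in the comments: a kubeadm API server listens on 6443, not 8080, so that is the port to open in the instance's security group. A hedged AWS CLI sketch; the group ID is a placeholder:)

```shell
# Allow inbound traffic to the kube-apiserver port in the EC2 security group
# (sg-0123456789abcdef0 is a placeholder for your group ID).
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 6443 \
    --cidr 0.0.0.0/0   # in practice, restrict this to your own IP range
```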

1 Answer


I was able to solve my problem of the dashboard not being accessible from outside the cluster using the command:

kubectl proxy --address='0.0.0.0' --port=8001 --accept-hosts='^*$'
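(Why this works: --address='0.0.0.0' binds the proxy to all interfaces instead of only loopback, and --accept-hosts='^*$' lifts the proxy's localhost-only Host-header check. A usage sketch; MASTER_IP is a placeholder for the instance's public IP:)

```shell
# Run the proxy reachable from outside the node, then test from another machine.
# Note: this exposes an UNauthenticated proxy to anyone who can reach port
# 8001, so restrict the security group to trusted addresses.
kubectl proxy --address='0.0.0.0' --port=8001 --accept-hosts='^*$' &
curl http://MASTER_IP:8001/ui
```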
  • I haven't tried kube in EC2. in your admin.conf file, can you check correct ip address listed there. 'server:' – sfgroups Jun 14 '17 at 01:45