
I have set up a Kubernetes cluster comprising a master and three nodes. I used the following for the setup:
1. kubeadm (1.7.1)
2. kubectl (1.7.1)
3. kubelet (1.7.1)
4. weave (weave-kube-1.6)
5. docker (17.06.0~ce-0~debian)

All four instances have been set up in Google Cloud, and the OS is Debian GNU/Linux 9 (stretch).
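For reference, the bootstrap followed the standard kubeadm flow for these versions, roughly along these lines (the flags and the Weave manifest URL below are reconstructions for illustration, not copied from my shell history):

$ kubeadm init --apiserver-advertise-address=10.128.0.2      # on the master
$ kubectl apply -f https://git.io/weave-kube-1.6             # Weave Net add-on (weave-kube 1.6)
$ kubeadm join --token <token> 10.128.0.2:6443               # on each node, with the token printed by kubeadm init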

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE
kube-system   etcd-master                      1/1       Running   0          19m
kube-system   kube-apiserver-master            1/1       Running   0          19m
kube-system   kube-controller-manager-master   1/1       Running   0          19m
kube-system   kube-dns-2425271678-cq9wh        3/3       Running   0          24m
kube-system   kube-proxy-q399p                 1/1       Running   0          24m
kube-system   kube-scheduler-master            1/1       Running   0          19m
kube-system   weave-net-m4bgj                  2/2       Running   0          4m


$ kubectl get nodes
NAME      STATUS     AGE       VERSION
master    Ready      1h        v1.7.1
node1     Ready      6m        v1.7.1
node2     Ready      5m        v1.7.1
node3     Ready      7m        v1.7.1

The apiserver process is running with the following parameters:

root      1148  1101  1 04:38 ?  00:03:38 kube-apiserver 
--experimental-bootstrap-token-auth=true --allow-privileged=true 
--secure-port=6443
--insecure-port=0 --service-cluster-ip-range=10.96.0.0/12 
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname 
--requestheader-username-headers=X-Remote-User 
--authorization-mode=Node,RBAC --advertise-address=10.128.0.2 
--etcd-servers=http://127.0.0.1:2379

I ran the following command to deploy the dashboard:

$ kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
serviceaccount "kubernetes-dashboard" created
clusterrolebinding "kubernetes-dashboard" created
deployment "kubernetes-dashboard" created

Since the dashboard was still not accessible, I also tried the following command, although it did not look quite relevant (I saw it suggested somewhere):

kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
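(For completeness, a narrower binding scoped to the dashboard's own service account would look roughly like the following; the binding name is arbitrary, and the service account name assumes the standard dashboard manifest.)

$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard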

Finally, I came across a link that looked relevant to my issue. I tried it, but I am getting the following error:

d:\Work>kubectl --kubeconfig=d:\Work\admin.conf proxy -p 80
Starting to serve on 127.0.0.1:80
I0719 13:37:13.971200    5680 logs.go:41] http: proxy error: context canceled
I0719 13:37:15.893200    5680 logs.go:41] http: proxy error: dial tcp 124.179.54.120:6443: connectex: No connection could be made
because the target machine actively refused it.

If I telnet to the master's IP (124.179.54.120) from my laptop on port 22, it works, but it does not work on port 6443. Port 6443 is open on the master, as I am able to connect to it with nc from my node machine, as shown below:

tom@node1:~$ nc -zv 10.128.0.2 6443
master.c.kubernetes-174104.internal [10.128.0.2] 6443 (?) open

On my laptop, the firewall is already disabled, and I also disabled the firewall on the master:

# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  anywhere             anywhere             /* kubernetes service portals */

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  anywhere             anywhere             /* kubernetes service portals */

Chain KUBE-SERVICES (2 references)
target     prot opt source               destination

In the Google Cloud console, I added TCP and UDP port 6443 to the ingress firewall rules, but I am still unable to access the dashboard using http://localhost/ui.
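The rule was added through the console; the equivalent gcloud command would be something like this (rule name, network, and source range are illustrative):

$ gcloud compute firewall-rules create allow-apiserver-6443 --network=default --allow=tcp:6443 --source-ranges=0.0.0.0/0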

Master config details: [screenshot]

Firewall config details: [screenshot]

UPDATE: Content of d:\Work\admin.conf

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <CA_cert>
    server: https://124.179.54.120:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: <client-cert>
    client-key-data: <client-key>
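
As a sanity check, the same kubeconfig can be used to talk to the apiserver directly from the laptop; if this also fails to connect, the problem is reaching 124.179.54.120:6443 itself rather than anything specific to the proxy:

d:\Work>kubectl --kubeconfig=d:\Work\admin.conf get nodes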

UPDATE 1: From one of the three nodes, I ran the following command:

tom@node1:~$ curl -v http://127.0.0.1:8001
* Rebuilt URL to: http://127.0.0.1:8001/
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8001 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:8001
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 502 Bad Gateway
< Date: Thu, 20 Jul 2017 06:57:48 GMT
< Content-Length: 0
< Content-Type: text/plain; charset=utf-8
<
* Curl_http_done: called premature == 0
* Connection #0 to host 127.0.0.1 left intact
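
One more check worth noting: the create output above only shows a serviceaccount, a clusterrolebinding, and a deployment, so it is worth confirming whether a dashboard service exists at all (kubernetes-dashboard is the service name the standard manifest would create; see also the comments below):

$ kubectl -n kube-system get svc
$ kubectl -n kube-system get svc kubernetes-dashboard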
– Technext
  • If `kubectl get pods` works, it means you can communicate with `master-node:6443`, which means `kubectl proxy` is pointing to something else. Can you paste `d:\Work\admin.conf` here? Also, port 6443 belongs not to the Ingress but to `kube-apiserver`. Ingress is usually configured to run on ports 80 and 443. – Eugene Chow Jul 19 '17 at 10:38
  • `kubectl get pods` was executed on the master itself. No command was ever run from my local laptop. Also, when I said Ingress, I was referring to Google Cloud's [firewall rules](https://cloud.google.com/compute/docs/vpc/firewalls). I have shown the whitelisted ports in the firewall snapshot in the post above. Sorry for the confusion. – Technext Jul 19 '17 at 11:09
  • @EugeneChow: Since there's no way to send a personal message, I thought of mentioning it here. Hundreds of characters were removed before posting the cert. Still, it can definitely happen to someone by mistake. So thanks for the edit; your change definitely looks better. :) – Technext Jul 20 '17 at 03:42
  • Cheers :) Everyone makes mistakes. Anyway, this looks like a GCE issue which I'm not familiar with. That part has to be solved first. Try `kubectl proxy` in the node itself. If it works, solving the "GCE ingress" issue should immediately allow you to proxy from your laptop. – Eugene Chow Jul 20 '17 at 05:22
  • I tried running `kubectl proxy` from one of the nodes and then did `nc -zv 127.0.0.1 8001`. I get `localhost [127.0.0.1] 8001 (?) open`, so it seems to be working fine. Not sure why `curl -Is http://127.0.0.1:8001` returns `HTTP/1.1 502 Bad Gateway`. – Technext Jul 20 '17 at 06:12
  • Tail the logs of kubelet, kube-proxy, kube-apiserver and then `curl -v http://127.0.0.1:8001`? What does it say? The logs might reveal the error. – Eugene Chow Jul 20 '17 at 06:43
  • I did not notice any change in those logs while running the curl command. Please check UPDATE1 in the post for output of `curl -v http://127.0.0.1:8001`. – Technext Jul 20 '17 at 08:31
  • It doesn't look like you've setup a service anywhere for the dashboard. I'm new to k8s but it's my understanding that without a service the pod won't be exposed outside the pod's internal network. What happens if you do `kubectl get svc -n kube-system`? Do you have a service listed there called kubernetes-dashboard of type ClusterIP? It's the ClusterIP that you want to tunnel to or proxy to I think. – Gareth Oates Jun 04 '18 at 19:00
  • Thanks for the reply @Gareth, but I'm sorry that I will not be able to verify it immediately. I've scrapped the setup, as the client was using Rancher to set up Kubernetes. I might revisit this sometime later. – Technext Jun 05 '18 at 03:12

1 Answer


By default, kubectl proxy only accepts incoming connections from localhost (both the IPv4 and IPv6 loopback addresses).
Try setting --accept-hosts='.*' when running the proxy so that it accepts connections from any host.
You might also need to set the --address flag to a public (non-loopback) IP, because the default value is 127.0.0.1.

More details in the kubectl proxy docs.
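
A full invocation would look something like this (the bind address and port are examples; note that this exposes the proxied API to anyone who can reach that address and port, with no additional authentication):

$ kubectl proxy --address=0.0.0.0 --port=8001 --accept-hosts='.*'

The dashboard should then be reachable from another machine at http://<master-or-node-ip>:8001/ui.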

– Toresan
  • Thanks for the reply @Toresan, but I'm sorry that I will not be able to verify it immediately. I've scrapped the setup, as the client is using Rancher to set up Kubernetes. If I happen to do a manual setup later, I'll revisit and try your suggestion. Thanks for taking the time to reply. – Technext Aug 11 '17 at 10:23
  • After days of searching, this is the first thing I've found on the entire internet that actually works to allow a browser on a remote machine to access the dashboard running on a server somewhere else. Amazing that the seemingly most common use case is completely undocumented. Thanks! – Brent212 Feb 09 '18 at 20:21