I had a similar problem these days, and the root cause was that in a cluster deployment (3 nodes), the Kubernetes dashboard pod was running on a worker (non-master) node.
The issue is that the proxy serves the dashboard only locally (for security reasons),
so the dashboard console could not be opened on the master node, nor on node 3!
Browser error on the master node (kubectl proxy & executed on this node):
"http: proxy error: dial tcp 10.32.0.2:8001: connect: connection refused"
Error on the worker node (kubectl proxy & executed on this node):
"http: proxy error: dial tcp [::1]:8080: connect: connection refused"
Solution:
The status of the cluster pods showed that the dashboard pod kubernetes-dashboard-7b544877d5-lj4xq was running on node 3:
namespace: kubernetes-dashboard
pod: kubernetes-dashboard-7b544877d5-lj4xq
node: pb-kn-node03
[root@PB-KN-Node01 ~]# kubectl get pods --all-namespaces -o wide | more
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-66bff467f8-ph7cc 1/1 Running 1 3d17h 10.32.0.3 pb-kn-node01 <none> <none>
kube-system coredns-66bff467f8-x22cv 1/1 Running 1 3d17h 10.32.0.2 pb-kn-node01 <none> <none>
kube-system etcd-pb-kn-node01 1/1 Running 2 3d17h 10.13.40.201 pb-kn-node01 <none> <none>
kube-system kube-apiserver-pb-kn-node01 1/1 Running 2 3d17h 10.13.40.201 pb-kn-node01 <none> <none>
kube-system kube-controller-manager-pb-kn-node01 1/1 Running 3 3d17h 10.13.40.201 pb-kn-node01 <none> <none>
kube-system kube-proxy-4ngd2 1/1 Running 2 3d17h 10.13.40.201 pb-kn-node01 <none> <none>
kube-system kube-proxy-7qvbj 1/1 Running 0 3d12h 10.13.40.202 pb-kn-node02 <none> <none>
kube-system kube-proxy-fgrcp 1/1 Running 0 3d12h 10.13.40.203 pb-kn-node03 <none> <none>
kube-system kube-scheduler-pb-kn-node01 1/1 Running 3 3d17h 10.13.40.201 pb-kn-node01 <none> <none>
kube-system weave-net-fm2kd 2/2 Running 5 3d12h 10.13.40.201 pb-kn-node01 <none> <none>
kube-system weave-net-l6rmw 2/2 Running 1 3d12h 10.13.40.203 pb-kn-node03 <none> <none>
kube-system weave-net-r56xk 2/2 Running 1 3d12h 10.13.40.202 pb-kn-node02 <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-6b4884c9d5-v2gqp 1/1 Running 0 2d22h 10.40.0.1 pb-kn-node02 <none> <none>
kubernetes-dashboard kubernetes-dashboard-7b544877d5-lj4xq 1/1 Running 15 2d22h 10.32.0.2 pb-kn-node03 <none> <none>
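For reference, a quicker way to check just the dashboard pod's node (pod name taken from the listing above) would be something like:

kubectl -n kubernetes-dashboard get pods -o wide
# or print only the node name of that pod
kubectl -n kubernetes-dashboard get pod kubernetes-dashboard-7b544877d5-lj4xq -o jsonpath='{.spec.nodeName}'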
So all active non-DaemonSet pods (including the dashboard) were reallocated from node 3
to the master node after draining the node:
[root@PB-KN-Node01 ~]# kubectl drain --delete-local-data --ignore-daemonsets pb-kn-node03
node/pb-kn-node03 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-fgrcp, kube-system/weave-net-l6rmw
node/pb-kn-node03 drained
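Note that draining leaves the node cordoned (unschedulable). Once the dashboard is where you want it, node 3 can be made schedulable again with something like:

kubectl uncordon pb-kn-node03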
After 2 minutes, a new dashboard pod (kubernetes-dashboard-7b544877d5-8ln2n) was running on the master node:
[root@PB-KN-Node01 ~]# kubectl get pods --all-namespaces -o wide | more
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-66bff467f8-ph7cc 1/1 Running 1 3d17h 10.32.0.3 pb-kn-node01 <none> <none>
kube-system coredns-66bff467f8-x22cv 1/1 Running 1 3d17h 10.32.0.2 pb-kn-node01 <none> <none>
kube-system etcd-pb-kn-node01 1/1 Running 2 3d17h 10.13.40.201 pb-kn-node01 <none> <none>
kube-system kube-apiserver-pb-kn-node01 1/1 Running 2 3d17h 10.13.40.201 pb-kn-node01 <none> <none>
kube-system kube-controller-manager-pb-kn-node01 1/1 Running 3 3d17h 10.13.40.201 pb-kn-node01 <none> <none>
kube-system kube-proxy-4ngd2 1/1 Running 2 3d17h 10.13.40.201 pb-kn-node01 <none> <none>
kube-system kube-proxy-7qvbj 1/1 Running 0 3d12h 10.13.40.202 pb-kn-node02 <none> <none>
kube-system kube-proxy-fgrcp 1/1 Running 0 3d12h 10.13.40.203 pb-kn-node03 <none> <none>
kube-system kube-scheduler-pb-kn-node01 1/1 Running 3 3d17h 10.13.40.201 pb-kn-node01 <none> <none>
kube-system weave-net-fm2kd 2/2 Running 5 3d12h 10.13.40.201 pb-kn-node01 <none> <none>
kube-system weave-net-l6rmw 2/2 Running 1 3d12h 10.13.40.203 pb-kn-node03 <none> <none>
kube-system weave-net-r56xk 2/2 Running 1 3d12h 10.13.40.202 pb-kn-node02 <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-6b4884c9d5-v2gqp 1/1 Running 0 2d22h 10.40.0.1 pb-kn-node02 <none> <none>
kubernetes-dashboard kubernetes-dashboard-7b544877d5-8ln2n 1/1 Running 0 89s 10.32.0.4 pb-kn-node01 <none> <none>
And the problem was solved: the Kubernetes dashboard was available on the master node.
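For completeness, with the dashboard pod on the master node, the usual local access pattern should work again from that host (the URL below assumes the standard kubernetes-dashboard deployment and service name):

kubectl proxy &
# then open in a browser on the same host:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/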