
I have spent two days now and I am still not able to figure it out.

The whole deployment is on bare-metal.

For simplicity, I have reduced the cluster from HA to one master node and two workers.

$ kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
worker1   Ready    <none>   99m    v1.19.2
worker2   Ready    <none>   99m    v1.19.2
master    Ready    master   127m   v1.19.2

I am running the NGINX ingress controller, but I think this is irrelevant, as the same configuration should also apply to HAProxy, for example.

$ kubectl -n ingress-nginx get pod
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-g645g        0/1     Completed   0          129m
ingress-nginx-admission-patch-ftg7p         0/1     Completed   2          129m
ingress-nginx-controller-587cd59444-cxm7z   1/1     Running     0          129m

I can see that there are no external IPs on the cluster:

$ kubectl get service -A
NAMESPACE                NAME                                 TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)                      AGE
cri-o-metrics-exporter   cri-o-metrics-exporter               ClusterIP   192.168.11.163    <none>        80/TCP                       129m
default                  kubernetes                           ClusterIP   192.168.0.1       <none>        443/TCP                      130m
ingress-nginx            ingress-nginx-controller             NodePort    192.168.30.224    <none>        80:32647/TCP,443:31706/TCP   130m
ingress-nginx            ingress-nginx-controller-admission   ClusterIP   192.168.212.9     <none>        443/TCP                      130m
kube-system              kube-dns                             ClusterIP   192.168.0.10      <none>        53/UDP,53/TCP,9153/TCP       130m
kube-system              metrics-server                       ClusterIP   192.168.178.171   <none>        443/TCP                      129m
kubernetes-dashboard     dashboard-metrics-scraper            ClusterIP   192.168.140.142   <none>        8000/TCP                     129m
kubernetes-dashboard     kubernetes-dashboard                 ClusterIP   192.168.100.126   <none>        443/TCP                      129m

Sample of the ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: dashboard-ingress-nginx
  namespace: kubernetes-dashboard
data:
  ssl-certificate: my-cert
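
For reference, the my-cert TLS secret referenced by the Ingress below can be created from an existing certificate/key pair. This is only a sketch; tls.crt and tls.key are placeholder file names:

# Create a TLS secret named my-cert in the dashboard's namespace
kubectl -n kubernetes-dashboard create secret tls my-cert \
  --cert=tls.crt --key=tls.key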

Sample of the Ingress conf:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dashboard-ingress-ssl
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/whitelist-source-range: 10.96.0.0/16  # the range to be allowed
spec:
  tls:
  - hosts:
    - kube.my.domain.internal
    secretName: my-cert
  rules:
  - host: kube.my.domain.internal
    http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443

If I point my browser to the domain, e.g. https://kube.my.domain.internal, I see 403 Forbidden. Could it be due to RBAC rules that I am not able to view the Dashboard?
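
One way to check whether the 403 comes from the whitelist-source-range annotation rather than from RBAC is to curl the HTTPS NodePort directly (a sketch; <node-ip> stands for any node's address, and 31706 is the HTTPS NodePort from the service listing above):

# -k skips certificate verification; the Host header must match the Ingress rule
curl -k -H "Host: kube.my.domain.internal" https://<node-ip>:31706/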

I have found related questions, and although the configurations seem to work for other users, they do not cover an ingress configuration for the dashboard. I also tried to whitelist a much larger range of IPs, as described in Restricting Access By IP (Allow/Block Listing) Using NGINX-Ingress Controller in Kubernetes, but with the same result.

I am also unable to understand why the NGINX ingress controller is launched on only one node, when I would expect it to run on both worker nodes. I have no labels on any of the nodes.

I also read the MetalLB Bare-metal considerations, but in my case I am not trying to reach the web outside of the private network; I am just trying to reach the nodes from outside the cluster, within that network. I could be wrong, but I do not think MetalLB is needed at this point.

Update: I have managed to launch the dashboard with kubectl proxy, as documented in the official Web UI (Dashboard) page, but since I want to upgrade my cluster to HA this is not the best solution: if the node where the proxy is running goes down, the Dashboard becomes inaccessible.
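
For reference, this is roughly what I did, following that page (the URL below is the one given in the official Dashboard docs):

# Start a local proxy to the API server (listens on localhost:8001 by default)
kubectl proxy

# The dashboard is then reachable at:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/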

Update 2: After following the MetalLB Layer 2 Configuration documentation, I got to the following point:

$ kubectl get pods -A -o wide
NAMESPACE                NAME                                        READY   STATUS      RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
cri-o-metrics-exporter   cri-o-metrics-exporter-77c9cf9746-5xw4d     1/1     Running     0          30m     172.16.9.131    workerNode   <none>           <none>
ingress-nginx            ingress-nginx-admission-create-cz9h9        0/1     Completed   0          31m     172.16.9.132    workerNode   <none>           <none>
ingress-nginx            ingress-nginx-admission-patch-8fkhk         0/1     Completed   2          31m     172.16.9.129    workerNode   <none>           <none>
ingress-nginx            ingress-nginx-controller-8679c5678d-fmc2q   1/1     Running     0          31m     172.16.9.134    workerNode   <none>           <none>
kube-system              calico-kube-controllers-574d679d8c-7jt87    1/1     Running     0          32m     172.16.25.193   masterNode   <none>           <none>
kube-system              calico-node-sf2cn                           1/1     Running     0          9m11s   10.96.95.52     workerNode   <none>           <none>
kube-system              calico-node-zq9vf                           1/1     Running     0          32m     10.96.96.98     masterNode   <none>           <none>
kube-system              coredns-7588b55795-5pg6m                    1/1     Running     0          32m     172.16.25.195   masterNode   <none>           <none>
kube-system              coredns-7588b55795-n8z2p                    1/1     Running     0          32m     172.16.25.194   masterNode   <none>           <none>
kube-system              etcd-masterNode                             1/1     Running     0          32m     10.96.96.98     masterNode   <none>           <none>
kube-system              kube-apiserver-masterNode                   1/1     Running     0          32m     10.96.96.98     masterNode   <none>           <none>
kube-system              kube-controller-manager-masterNode          1/1     Running     0          32m     10.96.96.98     masterNode   <none>           <none>
kube-system              kube-proxy-6d5sj                            1/1     Running     0          9m11s   10.96.95.52     workerNode   <none>           <none>
kube-system              kube-proxy-9dfbk                            1/1     Running     0          32m     10.96.96.98     masterNode   <none>           <none>
kube-system              kube-scheduler-masterNode                   1/1     Running     0          32m     10.96.96.98     masterNode   <none>           <none>
kube-system              metrics-server-76bb4cfc9f-5tzfh             1/1     Running     0          31m     172.16.9.130    workerNode   <none>           <none>
kubernetes-dashboard     dashboard-metrics-scraper-5f644f6df-8sjsx   1/1     Running     0          31m     172.16.25.197   masterNode   <none>           <none>
kubernetes-dashboard     kubernetes-dashboard-85b6486959-thhnl       1/1     Running     0          31m     172.16.25.196   masterNode   <none>           <none>
metallb-system           controller-56f5f66c6f-5vvhf                 1/1     Running     0          31m     172.16.9.133    workerNode   <none>           <none>
metallb-system           speaker-n5gxx                               1/1     Running     0          31m     10.96.96.98     masterNode   <none>           <none>
metallb-system           speaker-n9x9v                               1/1     Running     0          8m51s   10.96.95.52     workerNode   <none>           <none>
$ kubectl get service -A
NAMESPACE                NAME                                 TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)                      AGE
cri-o-metrics-exporter   cri-o-metrics-exporter               ClusterIP   192.168.74.27     <none>        80/TCP                       31m
default                  kubernetes                           ClusterIP   192.168.0.1       <none>        443/TCP                      33m
ingress-nginx            ingress-nginx-controller             NodePort    192.168.201.230   <none>        80:30509/TCP,443:31554/TCP   32m
ingress-nginx            ingress-nginx-controller-admission   ClusterIP   192.168.166.218   <none>        443/TCP                      32m
kube-system              kube-dns                             ClusterIP   192.168.0.10      <none>        53/UDP,53/TCP,9153/TCP       32m
kube-system              metrics-server                       ClusterIP   192.168.7.75      <none>        443/TCP                      31m
kubernetes-dashboard     dashboard-metrics-scraper            ClusterIP   192.168.51.178    <none>        8000/TCP                     31m
kubernetes-dashboard     kubernetes-dashboard                 ClusterIP   192.168.50.70     <none>        443/TCP                      31m

Yet I still do not see any external IPs, so I cannot reach the cluster through the NAT.
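
As a quick sanity check (MetalLB only assigns an address to Services of type LoadBalancer, and in the listing above ingress-nginx-controller is still a NodePort Service):

# If this prints nothing, there is no Service for MetalLB to give an address to
kubectl get svc -A | grep LoadBalancer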

Thanos
  • How did you set up your cluster? Did you follow any guides when creating and deploying it? – Wytrzymały Wiktor Oct 20 '20 at 08:31
  • Yes. The cluster is set up as described here: [Bare-metal considerations/Over a NodePort Service](https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#over-a-nodeport-service) – Thanos Oct 20 '20 at 10:42

2 Answers


The way I did a bare-metal setup is by installing MetalLB together with ingress-nginx, and using NAT to forward the traffic received on my host (ports 80 & 443) to ingress-nginx.

# MetalLB installation

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml

# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
kubectl apply -f $YAML_FILES/0_cluster_setup_metallb_conf.yaml

0_cluster_setup_metallb_conf.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.204.61.240-10.204.61.250
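
If the available addresses are not sequential, the pool can also list individual addresses as /32 CIDRs instead of a range (hypothetical addresses):

      addresses:
      # One /32 CIDR per non-contiguous address
      - 10.204.61.241/32
      - 10.204.61.245/32
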
Mihaimyh
  • Can you share an example config? I have found this documentation [metallb/layer-2-configuration](https://metallb.universe.tf/configuration/#layer-2-configuration) but I am not sure about the address pool. In my case I have only two workers, whose IPs are not sequential, so I am not sure how to configure them. – Thanos Oct 20 '20 at 14:17
  • Added the config on the reply. – Mihaimyh Oct 20 '20 at 15:54
  • I have done more or less the same and I am still not able to see public IPs on my cluster. Can you share the Ingress yaml file and any earlier step that was necessary? (See the update in my question.) – Thanos Oct 20 '20 at 16:01

Finally, after so much time, I managed to figure it out. I am a beginner on k8s, so the solution might help other beginners.

I decided to launch the ingress in the same namespace where the dashboard is running. You can choose a different namespace; just make sure to connect your namespace to the kubernetes-dashboard service. Documentation can be found here: Understanding namespaces and DNS.
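
Note that an Ingress can only reference Services in its own namespace. If you do place the Ingress in a different namespace, one possible bridge (a sketch; my-ingress-namespace is a hypothetical name) is an ExternalName Service pointing at the dashboard's cluster DNS name:

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: my-ingress-namespace   # hypothetical namespace holding the Ingress
spec:
  type: ExternalName
  # Cluster DNS name of the real dashboard Service (see "Understanding namespaces and DNS")
  externalName: kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local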

Complete code of example that will work with NGINX ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - "dashboard.example.com"
    secretName: kubernetes-dashboard-secret
  rules:
  - host: "dashboard.example.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443

Remember to follow the annotations of the ingress controller in use (NGINX in this example), as they may change in later releases. For example, since controller image quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1, the secure-backends annotation was replaced by backend-protocol.

Also, in the example provided, the external load balancer configuration has to be SSL Pass-Through.
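
Note as well that the ssl-passthrough annotation only takes effect if the controller itself runs with the --enable-ssl-passthrough flag; a sketch of the relevant part of the controller Deployment (other args omitted):

spec:
  containers:
  - name: controller
    args:
    - /nginx-ingress-controller
    # Without this flag the ssl-passthrough annotation is silently ignored
    - --enable-ssl-passthrough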

Update: In case someone else is a beginner on k8s, a minor point that was not so clear to me: if you decide to use MetalLB, you need to specify type: LoadBalancer on the ingress controller's Service.
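
On an existing installation this can be done, for example, by patching the Service (a sketch, assuming the default Service name from the listings above):

# Switch the controller Service from NodePort to LoadBalancer so MetalLB assigns it an IP
kubectl -n ingress-nginx patch svc ingress-nginx-controller \
  -p '{"spec": {"type": "LoadBalancer"}}'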

Thanos