
I have deployed ECK on my Kubernetes cluster (all Vagrant VMs). The cluster has the following configuration:

NAME       STATUS   ROLES                  AGE   VERSION
kmaster1   Ready    control-plane,master   27d   v1.21.1
kworker1   Ready    <none>                 27d   v1.21.1
kworker2   Ready    <none>                 27d   v1.21.1

I have also set up a load balancer with HAProxy. The load balancer config is as follows (I created my own private certificate):

frontend http_front
  bind *:80
  stats uri /haproxy?stats
  default_backend http_back

frontend https_front
  bind *:443 ssl crt /etc/ssl/private/mydomain.pem
  stats uri /haproxy?stats
  default_backend https_back


backend http_back
  balance roundrobin
  server kworker1 172.16.16.201:31953
  server kworker2 172.16.16.202:31953
 

backend https_back
  balance roundrobin
  server kworker1 172.16.16.201:31503 check-ssl ssl verify none
  server kworker2 172.16.16.202:31503 check-ssl ssl verify none

I have also deployed an nginx ingress controller; 31953 is the HTTP NodePort and 31503 is the HTTPS NodePort of the controller service:

NAMESPACE       NAME                               TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
nginx-ingress   nginx-ingress-controller-service   NodePort   10.103.189.197   <none>        80:31953/TCP,443:31503/TCP   8d    app=nginx-ingress

I am trying to make the Kibana dashboard available outside the cluster over HTTPS. It works fine and I can access it from within the cluster; however, I am unable to access it via the load balancer.

Kibana Pod:

NAMESPACE   NAME                            READY   STATUS    RESTARTS   AGE   IP               NODE       NOMINATED NODE   READINESS GATES
default     quickstart-kb-f74c666b9-nnn27   1/1     Running   4          27d   192.168.41.145   kworker1   <none>           <none>

I have mapped the load balancer IP to the hostname in /etc/hosts:

172.16.16.100   elastic.kubekluster.com

Any request to https://elastic.kubekluster.com results in the following error (logs from the nginx ingress controller pod):

10.0.2.15 - - [20/Jun/2021:17:38:14 +0000] "GET / HTTP/1.1" 502 157 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0" "-"
2021/06/20 17:38:14 [error] 178#178: *566 upstream prematurely closed connection while reading response header from upstream, client: 10.0.2.15, server: elastic.kubekluster.com, request: "GET / HTTP/1.1", upstream: "http://192.168.41.145:5601/", host: "elastic.kubekluster.com"

The HAProxy logs are as follows:

Jun 20 18:11:45 loadbalancer haproxy[18285]: 172.16.16.1:48662 [20/Jun/2021:18:11:45.782] https_front~ https_back/kworker2 0/0/0/4/4 502 294 - - ---- 1/1/0/0/0 0/0 "GET / HTTP/1.1"
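
One thing worth checking (a sketch, not something from the original post; the pod IP comes from the Kibana pod listing above): whether Kibana only answers HTTPS. ECK enables self-signed TLS on Kibana by default, and a TLS-only backend closing a plain-HTTP request would produce exactly the "upstream prematurely closed connection" error above.

# Run a throwaway curl pod against the Kibana pod IP:
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -vk https://192.168.41.145:5601/
# If HTTPS answers but a plain http:// request to the same port is closed
# immediately, the 502 is a protocol mismatch between nginx and the backend.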

The Ingress resource is as follows:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubekluster-elastic-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/default-backend: quickstart-kb-http
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600s"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600s"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600s"
    nginx.ingress.kubernetes.io/proxy-body-size: 20m
spec:
  tls:
    - hosts:
      - elastic.kubekluster.com
  rules:
  - host: elastic.kubekluster.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: quickstart-kb-http
            port: 
              number: 5601

I think the request is not reaching the Kibana pod, because I don't see any logs in the pod. I also don't understand why the request is being sent upstream as HTTP instead of HTTPS. Could you please point out any issues with my configuration?

bluelurker
  • `proxy-body-size` should be quoted like the timeouts. That is probably not the issue, but if you want it to take effect it should be quoted. – oz123 Jun 23 '21 at 21:09
  • It works fine and I can access it within the cluster - how are you testing it? Can you access the dashboard with `kubectl port-forward`? – oz123 Jun 23 '21 at 21:12
  • Yes, I added the annotations just to increase the timeouts; it didn't solve the problem. Adding quotes around the body size doesn't make any difference, but I have noted it. For your second question, I exposed the service "quickstart-kb-http" as a NodePort and accessed it via `https://<node-ip>:<node-port>`. I was redirected to the Kibana login page and was able to log in after entering credentials. I also forwarded the port to access it via the service, like this: `kubectl port-forward service/quickstart-kb-http 8080:5601`, and was able to access the dashboard at https://127.0.0.1:8080 – bluelurker Jun 23 '21 at 21:45
  • I believe the problem is that there is a confusion of protocols here. If the backend is HTTPS then HAProxy should really just pass the traffic through. Alternatively, use SSL in HAProxy, but remove everything related to TLS/SSL in the nginx ingress. – oz123 Jun 23 '21 at 22:17

2 Answers


I hope this helps... Here is how I set up a "LoadBalancer" using nginx and forward traffic to HTTPS services:

# kubectl get nodes -o wide
NAME           STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP      OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
asd-master-1   Ready    master   72d   v1.19.8   192.168.1.163   213.95.154.199   Ubuntu 20.04.2 LTS   5.8.0-45-generic   docker://20.10.6
asd-node-1     Ready    <none>   72d   v1.19.8   192.168.1.101   <none>           Ubuntu 20.04.1 LTS   5.8.0-45-generic   docker://19.3.15
asd-node-2     Ready    <none>   72d   v1.19.8   192.168.0.5     <none>           Ubuntu 20.04.1 LTS   5.8.0-45-generic   docker://19.3.15
asd-node-3     Ready    <none>   15d   v1.19.8   192.168.2.190   <none>           Ubuntu 20.04.1 LTS   5.8.0-45-generic   docker://19.3.15

This is the service for nginx:

# kubectl get service -n ingress-nginx
NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.101.161.113   <none>        80:30337/TCP,443:31996/TCP   72d

And this is the LoadBalancer configuration:

# cat /etc/nginx/nginx.conf
... trimmed ...
stream {
    upstream nginx_http {
        least_conn;
        server asd-master-1:30337 max_fails=3 fail_timeout=5s;
        server asd-node-1:30337 max_fails=3 fail_timeout=5s;
        server asd-node-2:30337 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 80;
        proxy_pass nginx_http;
        proxy_protocol on;
    }

    upstream nginx_https {
        least_conn;
        server 192.168.1.163:31996 max_fails=3 fail_timeout=5s;
        server 192.168.1.101:31996 max_fails=3 fail_timeout=5s;
        server 192.168.0.5:31996 max_fails=3 fail_timeout=5s;
    }
    server {
        listen     443;
        proxy_pass nginx_https;
        proxy_protocol on;
    }

}

The relevant part is that I am sending the PROXY protocol. You will need to configure the nginx ingress (in its ConfigMap) to accept this, and add the corresponding syntax to the HAProxy configuration.

This might be something like:

backend https_back
  balance roundrobin
  server kworker1 172.16.16.201:31503 check-ssl ssl verify none send-proxy-v2
  server kworker2 172.16.16.202:31503 check-ssl ssl verify none send-proxy-v2

The nginx ingress ConfigMap should be:

# kubectl get configmap -n ingress-nginx  nginx-configuration -o yaml
apiVersion: v1
data:
  use-proxy-protocol: "true"
kind: ConfigMap
metadata:
...

I hope this puts you on the right track.
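
One way to verify the PROXY protocol is actually in effect (a sketch, using the app=nginx-ingress selector from the question): the controller's access logs should start showing the real client address instead of the load balancer's internal IP (10.0.2.15 above).

# Tail the ingress controller logs and check the client IP field:
kubectl logs -n nginx-ingress -l app=nginx-ingress --tail=20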

oz123

Taking a cue from @oz123's answer, I analyzed the setup a bit more and was finally able to get it working with the following configuration.

Load balancer config (HAProxy)

I exposed the LB on a bridged network by configuring it in the Vagrantfile, and enabled TLS passthrough in HAProxy:

frontend kubernetes-frontend
  bind 192.168.1.23:6443
  mode tcp
  option tcplog
  default_backend kubernetes-backend

backend kubernetes-backend
  mode tcp
  option tcp-check
  balance roundrobin
  server kmaster1 172.16.16.101:6443 check fall 3 rise 2

frontend http_front
  bind *:80
  stats uri /haproxy?stats
  default_backend http_back

frontend https_front
  mode tcp
  bind *:443
  #ssl crt /etc/ssl/private/mydomain.pem
  stats uri /haproxy?stats
  default_backend https_back


backend http_back
  balance roundrobin
  server kworker1 172.16.16.201:32502
  server kworker2 172.16.16.202:32502


backend https_back
  mode tcp
  balance roundrobin
  server kworker1 172.16.16.201:31012
  server kworker2 172.16.16.202:31012
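
Before reloading HAProxy with the edited configuration, a standard sanity check (not part of the original answer):

# Validate the config file, then reload without dropping live connections:
haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl reload haproxy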
  

Ingress Controller

I created a NodePort ingress controller service and exposed all internal services (e.g. Kibana) through this controller. All services other than the ingress controller are ClusterIP:

apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    helm.sh/chart: ingress-nginx-4.0.15
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  clusterIP: 10.105.43.200
  clusterIPs:
  - 10.105.43.200
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: http
    nodePort: 32502
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    nodePort: 31012
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  sessionAffinity: None
  type: NodePort
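
To confirm that the NodePorts HAProxy points at (32502 and 31012) match what the controller service actually exposes, a quick check:

# The PORT(S) column should show 80:32502/TCP,443:31012/TCP:
kubectl get svc -n ingress-nginx ingress-nginx-controller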

Ingress Resource for Kibana

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  name: ingress-kibana
  namespace: default
spec:
  rules:
  - host: kibana.kubekluster.com
    http:
      paths:
      - backend:
          service:
            name: quickstart-kb-http
            port:
              number: 5601
        path: /
        pathType: Prefix
  tls:
  - secretName: quickstart-kb-http-certs-public
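
Note that ingress-nginx honours the ssl-passthrough annotation only when the controller itself is started with the --enable-ssl-passthrough flag. A quick way to check (the deployment name is assumed to match the Helm chart default):

# Should print the flag if passthrough is enabled on the controller:
kubectl -n ingress-nginx get deploy ingress-nginx-controller \
  -o jsonpath='{.spec.template.spec.containers[0].args}' | tr ',' '\n' | grep ssl-passthrough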

Finally, create an entry in /etc/hosts mapping the LB IP to the subdomain, and access the Kibana console at:

https://kibana.kubekluster.com
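
An end-to-end check without editing /etc/hosts (a sketch; the bridged LB IP 192.168.1.23 is taken from the kubernetes-frontend bind above, and -k is needed because the ECK certificate is self-signed):

# --resolve pins the hostname to the LB, exercising SNI-based passthrough:
curl -vk --resolve kibana.kubekluster.com:443:192.168.1.23 https://kibana.kubekluster.com/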
bluelurker