
I have installed the HAProxy ingress controller via Helm as a DaemonSet. I have configured the Ingress as follows:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
  namespace: rcc
  annotations:
    haproxy.org/check: 'true'
    haproxy.org/check-http: /serviceCheck
    haproxy.org/check-interval: 5s
    haproxy.org/cookie-persistence: SERVERID
    haproxy.org/forwarded-for: 'true'
    haproxy.org/load-balance: leastconn
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp-frontend
                port:
                  number: 8080

kubectl get ingress -n rcc
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
NAME             CLASS    HOSTS         ADDRESS          PORTS   AGE
webapp-ingress   <none>   example.com   10.110.186.170   80      11h

The service type chosen was LoadBalancer. I can ping the ingress IP address from any node and can also curl it on port 80 just fine. I can also browse any of the ingress pods' IP addresses from the nodes just fine. But when I browse a node IP on port 80, I get connection refused. Is there anything I am missing here?

  • Is your cluster in a managed cloud service? Tell us a little more! – Ronald Carvalho Dec 07 '21 at 03:04
  • It is a self hosted cluster via kubeadm. Using cri-o as engine. – zozo6015 Dec 07 '21 at 03:08
  • What is the `type` of haproxy ingress service? `kubectl get svc -n haproxy_namespace`. Most likely it's a `NodePort` then you have to access your ingress by `node_IP:NodePort` because port `80` is exposed inside the cluster (while clusters built using `kubeadm` have this connectivity from nodes to inside the cluster set up). Another option is to use `metallb` (see [here](https://metallb.universe.tf/installation/)) + change haproxy service `type` to `loadbalancer`. – moonkotte Dec 07 '21 at 10:05

1 Answer


I installed the latest haproxy ingress, which is version 0.13.4, using helm.

By default it's installed with the LoadBalancer service type:

$ kubectl get svc -n ingress-haproxy

NAME              TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
haproxy-ingress   LoadBalancer   10.102.166.149   <pending>     80:30312/TCP,443:32524/TCP   3m45s

Since I have the same kind of kubeadm cluster, EXTERNAL-IP will stay pending. And as you correctly mentioned in the question, the CLUSTER-IP is accessible on the nodes when the cluster is set up using kubeadm.
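
For example, from any node a request to the CLUSTER-IP should at least connect (a sketch reusing the CLUSTER-IP from the output above; the response code depends on whether an ingress rule matches the Host header):

curl 10.102.166.149 -IH "Host: example.com"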


There are two options for accessing your ingress:

  1. Using NodePort:

From the output above, there's a NodePort 30312 for the internally exposed port 80. Therefore, from outside the cluster it should be accessed as NODE_IP:NodePort:

curl NODE_IP:30312 -IH "Host: example.com"
HTTP/1.1 200 OK
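
If you don't want to read the port off the table, the NodePort can also be looked up programmatically (a sketch, assuming the service name and namespace from the output above):

kubectl get svc haproxy-ingress -n ingress-haproxy \
  -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}'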
  2. Set up metallb:

Follow the installation guide; the second step is to configure metallb. I use layer 2. Be careful to assign an IP range that is not already in use!
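
For illustration, a minimal layer 2 configuration sketch for the ConfigMap-based metallb releases current at the time (newer releases use CRDs instead); the address range is a placeholder and must be an unused range from your own network:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.16.1.240-172.16.1.250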

After I installed and set up metallb, my haproxy service has an EXTERNAL-IP now:

$ kubectl get svc -n ingress-haproxy

NAME              TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
haproxy-ingress   LoadBalancer   10.102.166.149   172.16.1.241   80:30312/TCP,443:32524/TCP   10m

And now I can access the ingress by its EXTERNAL-IP on port 80:

curl 172.16.1.241 -IH "Host: example.com"
HTTP/1.1 200 OK
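
To test the same thing from a browser without real DNS, the hostname can be mapped locally (a sketch using the EXTERNAL-IP above):

echo '172.16.1.241 example.com' | sudo tee -a /etc/hosts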

  • Cool, I am one step closer. I have requested two IP addresses from the VPS service provider and configured metallb, and I can see the EXTERNAL-IP configured with one of the external IPs. From inside the master node I can browse, but from outside it's not working. The error message is a timeout now, though, not a rejection like before. – zozo6015 Dec 07 '21 at 13:10
  • At this point it becomes a network configuration question. It seems to be correctly configured on the master node (also, are the other nodes in the same subnet? Does it work fine from them?). Since you mentioned "vps service provider", I tend to think it's about cloud providers. Therefore check [metallb on cloud platforms](https://metallb.universe.tf/installation/clouds/). Ideally you should use a load balancer which the cloud can provide. – moonkotte Dec 07 '21 at 15:47
  • @zozo6015 If it's a local network team/department then I suppose they should take care of routing and everything else outside the VM/target network/subnet. – moonkotte Dec 07 '21 at 16:39
  • It is OVH and it does not provide a LoadBalancer type of service. Also, the k8s cluster is built via kubeadm on VPSs, not on the hosting service's Kubernetes platform. – zozo6015 Dec 07 '21 at 20:36
  • First time I hear about this cloud provider. I understand that it's not a managed k8s service. To me this is a different question from the original, and [another question should be asked](https://meta.stackexchange.com/q/39223). So I'd ask the cloud provider first to troubleshoot and make sure traffic actually reaches the node. E.g. install a simple nginx server and make sure traffic can reach it. Then it's a network setup question for `metallb` + your cloud provider. Unfortunately I'm not a network engineer, so I can't help with this part. – moonkotte Dec 08 '21 at 09:20
  • Thanks for your suggestions. So far it looks like the request does not reach metallb from outside, and honestly the cloud provider is kind of useless in this case. Therefore, even though it was a good learning curve, I might drop this whole ingress project and keep working with NodePort and an external HAProxy. – zozo6015 Dec 08 '21 at 09:27
  • @zozo6015 Actually, there's another option which is to use `HostNetwork`. But it has its downsides. Please see [nginx ingress - hostnetwork](https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network) as an example of what and how + downsides. The same can work for [haproxy](https://haproxy-ingress.github.io/docs/getting-started/#installation) - see step 4. – moonkotte Dec 08 '21 at 09:27
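
To illustrate the `HostNetwork` option mentioned in the last comment, here is a minimal sketch of the relevant pod template fields (these are standard Kubernetes fields; the rest of the controller DaemonSet is elided):

spec:
  template:
    spec:
      # bind the controller directly to ports 80/443 on each node it runs on
      hostNetwork: true
      # keep cluster DNS resolution working while on the host network
      dnsPolicy: ClusterFirstWithHostNet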