We have a setup with our own iptables rules, applied by a systemd service, which block external traffic to our public interface. We set up RKE2 with both:

node-ip: <private address in 172.16.0.0/12 range>
node-external-ip: <private address in 172.16.0.0/12 range>
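
For completeness, both options are set in /etc/rancher/rke2/config.yaml on each node. A minimal sketch of the relevant part, with 172.16.0.10 standing in for our actual private address:

# /etc/rancher/rke2/config.yaml (sketch; the address is a placeholder)
node-ip: "172.16.0.10"
node-external-ip: "172.16.0.10"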

We use NGINX as an ingress controller. Before upgrading from Kubernetes 1.22 to 1.25 this worked fine, and incoming traffic to our ingresses was only possible from our private network. After the upgrade, RKE2 seems to open the firewall on the public interface as well.
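
This is roughly how I observe it (command sketch; eth0/eth1 are our public/private interface names, which may differ elsewhere):

# show which addresses the ingress ports are listening on
ss -tlnp | grep -E ':(80|443)\b'
# dump the current INPUT chain to compare against our own rules
sudo iptables -S INPUT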

I tried configuring flannel with this config:

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-canal
  namespace: kube-system
spec:
  valuesContent: |-
    flannel:
      iface: "eth1"

where eth1 is the interface on our private network. But it does not seem to help: it is specifically the NGINX ingress ports that are opened to the world, and I cannot find much information on the RKE2 website about how to configure this differently.
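
One idea I came across but have not verified: the ingress-nginx ConfigMap supports a bind-address option, which in RKE2 should be settable through a HelmChartConfig for the rke2-ingress-nginx chart. A sketch (172.16.0.10 is a placeholder for a node's private address):

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      config:
        # bind-address restricts the addresses nginx listens on
        # (placeholder; would have to match each node's private IP)
        bind-address: "172.16.0.10"

Since bind-address would have to match each node's own address, a single shared ConfigMap value does not feel like the right mechanism for a DaemonSet, so I am not sure this is the intended approach.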

How can I make sure our RKE2 cluster only opens those ports on our private network interface?

