
I have set up an ingress for an application but want to whitelist my IP address, so I created this Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/whitelist-source-range: ${MY_IP}/32
  name: ${INGRESS_NAME}
spec:
  rules:
  - host: ${DNS_NAME}
    http:
      paths:
      - backend:
          serviceName: ${SVC_NAME}
          servicePort: ${SVC_PORT}
  tls:
  - hosts:
    - ${DNS_NAME}
    secretName: tls-secret

But when I try to access it I get a 403 Forbidden, and in the nginx logs I see a client IP, but it is the IP of one of the cluster nodes and not my home IP.

I also created a configmap with this configuration:

data:
  use-forwarded-headers: "true"

In the nginx.conf inside the container I can see that this has been correctly applied, but I still get a 403 Forbidden, and the client IP logged is still the cluster node's.
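For completeness, the full ConfigMap looks roughly like this. The name and namespace must match what the controller is started with via its `--configmap` flag; the values below are assumptions for a typical install, not taken from my cluster:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Assumed name/namespace -- adjust to match your controller's --configmap flag
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-forwarded-headers: "true"
```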

I am running on an AKS cluster and the nginx ingress controller is behind an Azure load balancer. The nginx ingress controller svc is exposed as type LoadBalancer, and the load balancer has rules forwarding to the NodePort opened by the svc.

Do I need to configure something else within Nginx?

bramvdk

2 Answers


If you've installed nginx-ingress with the Helm chart, you can simply configure your values.yaml file with controller.service.externalTrafficPolicy: Local, which I believe will apply to all of your Services. Otherwise, you can configure specific Services with service.spec.externalTrafficPolicy: Local to achieve the same effect on those specific Services.
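As a sketch, the Helm values for the first option would look like this (the `controller.service` key path assumes a reasonably recent nginx-ingress chart; check your chart's values for the exact structure):

```yaml
# values.yaml fragment for the nginx-ingress Helm chart (sketch)
controller:
  service:
    # Preserve the client source IP instead of SNATing it to a node IP
    externalTrafficPolicy: Local
```

The same effect can be had without Helm by setting `spec.externalTrafficPolicy: Local` directly on the controller's LoadBalancer Service. Note that `Local` only routes to nodes that actually run a controller pod, so health-check behavior at the cloud load balancer changes accordingly.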


Jackie Luc
  • I feel the latter might not work. If we just enable it on the application's svc and not on the nginx-ingress's svc, the IP of the node would still get forwarded to the svc instead of the real IP. – Pramod Setlur Jan 29 '21 at 02:08
  • Tried both the use-forwarded-headers in the configmap and the "externalTrafficPolicy" in the nginx ingress (quay ingress-controller 0.30) on an Oracle Cloud; neither worked for me – tuxErrante Mar 09 '21 at 12:24
  • Doing this on specific services didn't work for me. Setting externalTrafficPolicy: Local on the nginx-ingress-controller SERVICE (not the deployment or config) made everything magically work. Even ClusterIP services now get the correct headers. – Phil Apr 21 '21 at 18:08
  • Wish I could upvote myself. 20 months later and I have run into this again. And just found that editing my load balancer service magically fixes everything after about a minute. Why isn't this the default setting? – Phil Dec 24 '22 at 23:34

It sounds like your Nginx Ingress Controller is behind a NodePort (or LoadBalancer) Service, i.e. behind kube-proxy. Generally, to get the controller to see the raw connecting IP you need to deploy it with hostNetwork: true so it listens directly for incoming traffic.
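A minimal sketch of the hostNetwork approach, assuming the controller runs as a Deployment (the name, labels, and image tag below are illustrative, not from any specific install):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller   # illustrative name
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      hostNetwork: true                     # bind directly to the node's network
      dnsPolicy: ClusterFirstWithHostNet    # keep in-cluster DNS working with hostNetwork
      containers:
      - name: controller
        image: k8s.gcr.io/ingress-nginx/controller:v0.30.0   # example tag
        ports:
        - containerPort: 80
        - containerPort: 443
```

With hostNetwork the controller binds ports 80/443 on the node itself, so client connections arrive without kube-proxy SNAT; the trade-off is that only one controller pod can run per node.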

coderanger
  • Hi, yeah, sorry, forgot to mention that; I've edited the question. My nginx controller is exposed as type LoadBalancer and is indeed behind an Azure load balancer, which has LB rules forwarding to the NodePorts opened by the svc. – bramvdk Apr 01 '20 at 11:00
  • Sorry, I tried to edit coderanger's suggestion but meant to edit my own. I want to add that kube-proxy is used by default in AKS. – bramvdk Apr 01 '20 at 12:44