
I have a GKE cluster which uses the Nginx Ingress Controller as its ingress engine. Currently, when I set up the Nginx Ingress Controller I define a Service of kind: LoadBalancer and point it to an external static IP previously reserved on GCP. The problem is that this only binds to a regional static IP address (an L4 Load Balancer, if I'm not mistaken). I want to have a Global Load Balancer instead.
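
For reference, this is roughly the Service I apply today (the IP is a placeholder for the reserved regional address):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: <RESERVED_REGIONAL_STATIC_IP>   # the regional static IP reserved on GCP
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller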

I know that I can achieve that by using the GKE ingress controller instead of the Nginx Ingress Controller. But I still want to use Nginx Ingress because of its powerful annotations, like rewriting headers based on conditions, etc.; things not available with GKE Ingress annotations.

Finally, is there any way to combine a Global Load Balancer with the Nginx Ingress Controller, or to put a Global Load Balancer in front of the L4 Load Balancer created by Nginx?

We need to have a Global Load Balancer in order to be protected by Cloud Armor.

Mauricio
  • Which Cloud Armor features do you need? It is now possible to use Cloud Armor with TCP/SSL proxy for DDoS protection, but it would not provide WAF. – Gari Singh Jun 03 '22 at 08:50
  • What are you using to install NGINX Ingress controller? – Gari Singh Jun 03 '22 at 08:51
  • @GariSingh I use the gke manifest available at kubernetes.github.io page: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml – Mauricio Jun 03 '22 at 14:20
  • @GariSingh. I want DDoS protection. But also SQL Injection and XSS protection features. – Mauricio Jun 03 '22 at 14:25
  • OK. So you'll definitely need to use the HTTP(S) Load Balancer, which means you'll need to set up Ingress for your NGINX controller. I'll post an answer below soon. – Gari Singh Jun 03 '22 at 17:47
  • @GariSingh, do you have any advice, or do you agree with Rami H's solution? – rrob Jul 11 '22 at 15:00
  • Rami H's solution will definitely work. You still end up with two layers of HTTP proxy / load balancing, but it's a pretty clean solution. – Gari Singh Jul 11 '22 at 22:06
  • Ok, thank you, I'm going to try it. @Mauricio, did you manage to solve it? – rrob Jul 12 '22 at 07:52
  • NEGs with the LB work, but I'm not able to include nginx-ingress - I also posted a new question on this topic at https://stackoverflow.com/questions/72950423/gcp-external-http-cloud-load-balancer-with-nginx-ingress-on-gke and contacted the hodo.dev tutorial guy. Please @GariSingh, do you have any advice, because this is beyond my knowledge. – rrob Jul 12 '22 at 10:01
  • @rrob we had to put this project on hold so didn't actually have the time to try it. – Mauricio Jul 12 '22 at 12:57

2 Answers


I finally managed to make the Nginx Ingress Controller and the L7 HTTP(S) Load Balancer work together.

Based on @rrob's reply to his own question, I managed to make it work. The only difference is that his solution installs a classic HTTP(S) LoadBalancer instead of the new version, and I also cover the creation of the IP Address, the self-signed Certificate, and the HTTP-to-HTTPS redirect on the HTTP proxy. I will place here the detailed steps that worked for me.

These steps assume we already have a cluster created with VPC-native traffic routing enabled.

Before I needed the HTTP(S) LoadBalancer, I would just apply the manifests provided by the NGINX docs page to install the Nginx Ingress Controller. That would create a Service of type LoadBalancer, which would then automatically create a regional L4 LoadBalancer.

But now I need Cloud Armor and WAF, which the L4 LoadBalancer doesn't support. An HTTP(S) Load Balancer is needed in order for Cloud Armor to work.

In order to have the Nginx Ingress Controller working with the new HTTP(S) LoadBalancer, we need to change the type: LoadBalancer on the Nginx Ingress Controller Service to ClusterIP instead, and add the NEG annotation cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "ingress-nginx-80-neg"}}}' to it. With this NEG annotation, GCP will automatically create a Network Endpoint Group that will point to the Nginx Ingress Controller service running in GKE. This Network Endpoint Group will serve as the backend of our HTTP(S) Load Balancer.

NOTE: When using the cloud.google.com/neg annotation, GCP will create one Network Endpoint Group for each zone containing nodes with Nginx Ingress Controller pods. For example, if you set your NodePool to spread across us-central1-a, us-central1-b and us-central1-f, GCP will create three Network Endpoint Groups if you have one Ingress Controller replica in each node. So when you set the Load Balancer's backend, you need to add all of them as backends, as explained in Step 5.

apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "ingress-nginx-80-neg"}}}'
spec:
  type: ClusterIP
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv4
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller

If you install the Nginx Ingress Controller using Helm, you need to override the values to add the NEG annotation to the service, so the values.yaml would look something like this:

controller:
  service:
    type: ClusterIP
    annotations:
      cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "ingress-nginx-80-neg"}}}'

To install it, add the ingress-nginx repository to Helm:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

Then install it:

helm install -f values.yaml ingress-nginx ingress-nginx/ingress-nginx
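
After installing (via the manifests or via Helm), a quick sanity check I'd suggest before moving on, to confirm GCP actually created the NEG(s) (adjust the namespace to wherever the controller Service lives):

# The GKE NEG controller writes a cloud.google.com/neg-status annotation on the Service once the NEGs exist
kubectl get svc ingress-nginx-controller -n ingress-nginx \
    -o jsonpath='{.metadata.annotations.cloud\.google\.com/neg-status}'

# List the NEGs created for the annotated Service (you should see one per zone that has controller pods)
gcloud compute network-endpoint-groups list --filter="name=ingress-nginx-80-neg"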

The next steps are as follows. First, set a few variables that will be used throughout:

PROJECT_ID=<project-id>
ZONE=us-central1-a
CLUSTER_NAME=<cluster-name>
HEALTH_CHECK_NAME=nginx-ingress-controller-health-check
NETWORK_NAME=<network-name>
CERTIFICATE_NAME=self-managed-exp-<day>-<month>-<year>
GKE_NODE_METADATA=$(kubectl get nodes -o jsonpath='{.items[0].metadata}')
GKE_SAMPLE_NODE_NAME=$(echo $GKE_NODE_METADATA | jq -r .name)
GKE_SAMPLE_NODE_ZONE=$(echo $GKE_NODE_METADATA | jq -r .labels | jq -r '."topology.kubernetes.io/zone"')
NETWORK_TAGS=$(gcloud compute instances describe \
    $GKE_SAMPLE_NODE_NAME --project $PROJECT_ID \
    --zone=$GKE_SAMPLE_NODE_ZONE --format="value(tags.items[0])")
  1. Create a Static IP Address (skip if you already have one):

Has to be Premium tier and Global

gcloud compute addresses create ${CLUSTER_NAME}-loadbalancer-ip \
    --global \
    --ip-version IPV4
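
If you want to note down the reserved address for later (DNS and the LoadBalancer frontend), you can read it back with:

gcloud compute addresses describe ${CLUSTER_NAME}-loadbalancer-ip \
    --global \
    --format="value(address)"
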
  2. Create a Firewall rule allowing the L7 HTTP(S) Load Balancer to access our cluster:
gcloud compute firewall-rules create ${CLUSTER_NAME}-allow-tcp-loadbalancer \
    --allow tcp:80 \
    --source-ranges 130.211.0.0/22,35.191.0.0/16 \
    --target-tags $NETWORK_TAGS \
    --network $NETWORK_NAME
  3. Create a Health Check for our to-be-created Backend Service:
gcloud compute health-checks create http ${CLUSTER_NAME}-nginx-health-check \
  --port 80 \
  --check-interval 60 \
  --unhealthy-threshold 3 \
  --healthy-threshold 1 \
  --timeout 5 \
  --request-path /healthz
  4. Create a Backend Service, which is used to tell the LoadBalancer how to connect to and distribute traffic to the pods:
gcloud compute backend-services create ${CLUSTER_NAME}-backend-service \
    --load-balancing-scheme=EXTERNAL \
    --protocol=HTTP \
    --port-name=http \
    --health-checks=${CLUSTER_NAME}-nginx-health-check \
    --global
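
Since Cloud Armor was the whole reason for moving to the HTTP(S) LoadBalancer, this is also the place where a security policy can be attached to the Backend Service. A rough sketch (the policy name and the two preconfigured WAF rules are just examples, adjust them to your needs):

# Example Cloud Armor policy with preconfigured XSS and SQLi WAF rules (name and rules are illustrative)
gcloud compute security-policies create ${CLUSTER_NAME}-armor-policy \
    --description="WAF policy for the nginx backend service"

gcloud compute security-policies rules create 1000 \
    --security-policy=${CLUSTER_NAME}-armor-policy \
    --expression="evaluatePreconfiguredExpr('xss-stable')" \
    --action=deny-403

gcloud compute security-policies rules create 1001 \
    --security-policy=${CLUSTER_NAME}-armor-policy \
    --expression="evaluatePreconfiguredExpr('sqli-stable')" \
    --action=deny-403

# Attach the policy to the Backend Service created above
gcloud compute backend-services update ${CLUSTER_NAME}-backend-service \
    --security-policy=${CLUSTER_NAME}-armor-policy \
    --global
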
  5. Now it's time to add the Nginx NEGs (the ones created by the annotation earlier) as backends of the Backend Service created in the previous step:

As explained earlier, a Network Endpoint Group will be created for each zone that contains Nginx Ingress Controller pods. So the current layout of my setup is:

a. **Node Pool** configured to span across **us-central1-a** and **us-central1-c**.
b. **Nginx Ingress Controller** configured to have 4 replicas.
c. **Node 1** in **us-central1-a** will have 2 replicas.
d. **Node 2** in **us-central1-c** will have 2 replicas.
e. GCP generated two Network Endpoint Groups. One for **us-central1-a** and other for **us-central1-c**.

So, in the example below, I add the two NEGs (one for each zone) as backends of my Backend Service.

# us-central1-a
gcloud compute backend-services add-backend ${CLUSTER_NAME}-backend-service \
  --network-endpoint-group=ingress-nginx-80-neg \
  --network-endpoint-group-zone=us-central1-a \
  --balancing-mode=RATE \
  --capacity-scaler=1.0 \
  --max-rate-per-endpoint=100 \
  --global

# us-central1-c
gcloud compute backend-services add-backend ${CLUSTER_NAME}-backend-service \
  --network-endpoint-group=ingress-nginx-80-neg \
  --network-endpoint-group-zone=us-central1-c \
  --balancing-mode=RATE \
  --capacity-scaler=1.0 \
  --max-rate-per-endpoint=100 \
  --global
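
Optionally, once the backends are attached you can already check whether the health checks see the nginx endpoints as healthy (just a sanity check, not a required step):

gcloud compute backend-services get-health ${CLUSTER_NAME}-backend-service \
    --global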
  
  6. Create the load balancer itself (the URL map):
gcloud compute url-maps create ${CLUSTER_NAME}-loadbalancer \
    --default-service ${CLUSTER_NAME}-backend-service

  7. Create a self-managed Certificate (it could also be a Google-managed certificate, but here we cover the self-managed one). You can also create it from the Console.
gcloud compute ssl-certificates create $CERTIFICATE_NAME \
    --certificate=my-cert.pem \
    --private-key=my-privkey.pem \
    --global

Finally, I will set up the LoadBalancer frontend through the Console interface, which is way easier.

  1. To create the LoadBalancer frontend, open the LoadBalancer in the Console and click "Edit".

  2. The Frontend configuration tab will be incomplete. Go there.

  3. Click on "ADD FRONTEND IP AND PORT"

  4. Give it a name and select HTTPS on the field Protocol.

  5. On IP Address, change from Ephemeral to your previously reserved static IP.

  6. Select your certificate and check Enable HTTP to HTTPS redirect if you want (I did).

  7. Save the LoadBalancer. Then, entering the LoadBalancer page, we should see our nginx instance(s) healthy and green. In my case, I've set up the Nginx Ingress Controller to have 4 replicas.
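
For reference, if you prefer to stay on the command line instead of the Console, a rough equivalent of the frontend setup would be the following (the proxy and forwarding-rule names are just examples; the HTTP-to-HTTPS redirect is easier to enable from the Console, as in step 6):

gcloud compute target-https-proxies create ${CLUSTER_NAME}-https-proxy \
    --url-map=${CLUSTER_NAME}-loadbalancer \
    --ssl-certificates=$CERTIFICATE_NAME

gcloud compute forwarding-rules create ${CLUSTER_NAME}-https-forwarding-rule \
    --load-balancing-scheme=EXTERNAL \
    --address=${CLUSTER_NAME}-loadbalancer-ip \
    --global \
    --target-https-proxy=${CLUSTER_NAME}-https-proxy \
    --ports=443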

Finally, we just need to point our domains to the LoadBalancer IP and create our Ingress file.

NOTE: The Ingress will no longer handle the certificate; it is now managed externally by the LoadBalancer. So the Ingress won't have a tls definition:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/upstream-fail-timeout: "1200"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      set $http_origin "${scheme}://${host}";
      more_set_headers "server: hide";
      more_set_headers "X-Content-Type-Options: nosniff";
      more_set_headers "Referrer-Policy: strict-origin";
  name: ingress-nginx
  namespace: prod

spec:
  rules:
  - host: app.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
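
To quickly verify the whole chain before (or instead of) changing DNS, something like this should reach the app through the Global LoadBalancer and the Nginx Ingress Controller (domain and IP are placeholders; -k is only there because of the self-signed certificate):

curl -k --resolve app.mydomain.com:443:<LOAD_BALANCER_IP> https://app.mydomain.com/
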
Mauricio

You can create the Nginx as a service of type LoadBalancer and give it a NEG annotation as per this google documentation.

https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing

Then you can use this NEG as a backend service (target) for HTTP(S) load balancing

You can use the gcloud commands from this article

https://hodo.dev/posts/post-27-gcp-using-neg/

Rami H
  • Awesome, thanks for the direction. We are solving the exact same issue. I'm going to try it, would you please have some example or other advice? – rrob Jul 12 '22 at 08:06
  • @Rami H When you say: "You can create the Nginx as a service of type LoadBalancer and give it a NEG annotation". The documentation you provided says you can only use neg annotation on services of type ClusterIP or NodePort. Will it work with type LoadBalancer? – Mauricio Jul 21 '22 at 17:52
  • Just to clarify. This won't work because you can't use NEG with service `type: LoadBalancer` – Mauricio Jul 22 '22 at 20:41