
When I replicate the application on more than one pod, the web app returns an HTTP 504 when accessed through an NGINX load balancer.

The NGINX load balancer sits outside of the Kubernetes cluster and acts as a reverse proxy + load balancer, so NGINX forwards the requests to a node hosting the web-app container. Important: I don't want the NGINX host to be part of the cluster (as long as that can be avoided).

upstream website {
    ip_hash;
    server 1.1.1.1:30300;
    #server 2.2.2.2:30300;
}

server {
    listen                          443 ssl http2;
    server_name                     example.com;

    location / {
            proxy_pass http://website;

            proxy_cache off;
            proxy_buffering off;

            proxy_read_timeout 1d;
            proxy_connect_timeout 4;
            proxy_send_timeout 1d;

            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $http_connection;

            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

This config works if, and only if, the app has been deployed to the 1.1.1.1 node only. If I replicate the web app to 2.2.2.2 as well, the snippet above already leads to a 504, even though 2.2.2.2 is still commented out. Commenting 2.2.2.2 back in doesn't change anything.
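To narrow down whether the 504 comes from NGINX or from the NodePort routing itself, it can help to query the NodePort directly from the NGINX host, bypassing the proxy (a diagnostic sketch; the node IPs and port are the ones from the config above, and the app speaks plain HTTP on the NodePort since TLS is terminated at NGINX):

# Hit the NodePort on each node directly; repeat a few times,
# since kube-proxy may route each request to a different pod.
curl -v http://1.1.1.1:30300/
curl -v http://2.2.2.2:30300/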

As far as I understand, the NodePort is a publicly reachable port that maps to the Service's internal port (the port field), which in turn forwards to the targetPort the container listens on. Hence, NodePort 30300 is forwarded to 2000, which is also the target port the web app listens on. Upon replication, the second pod will also host the web app (+ microservices) and be exposed via NodePort 30300. So we have two NodePorts 30300 within our k8s network, and I guess this might lead to confusion and routing issues.
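For reference, this is how the three port fields of the Service relate to each other (a minimal sketch with targetPort spelled out; in the manifest below it is omitted and therefore defaults to the value of port):

ports:
- name: swiper-web-app-example
  nodePort: 30300   # reachable on every node's public IP
  port: 2000        # the Service's cluster-internal port
  targetPort: 2000  # the containerPort the pod listens on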

apiVersion: apps/v1
kind: Deployment
metadata:
  name: swiper-web
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: swiper

  template:
    metadata:
      labels:
        app: swiper
    spec:
      containers:
 
      - name: swiper-web-app-example
        image: docker.example.com/swiper.web.app.webapp:$(Build.BuildId)
        ports:
        - containerPort: 2000
        resources:
          limits:
            memory: "2.2G"
            cpu: "0.6"


      - name: swiper-web-api-oauth
        image: docker.example.com/swiper.web.api.oauth:$(Build.BuildId)
        ports:
        - containerPort: 2010
        resources:
          limits:
            memory: "100M"
            cpu: "0.1"

      imagePullSecrets:
      - name: regcred

      dnsPolicy: "None"
      dnsConfig:
        nameservers:
        - 8.8.8.8

---

apiVersion: v1
kind: Service
metadata:
  name: swiper-web-service
  namespace: default
spec:
  type: NodePort
  selector:
    app: swiper
  ports:
  - name: swiper-web-app-example
    port: 2000
    nodePort: 30300

  - name: swiper-web-api-oauth
    port: 2010


Edit:

Adding externalTrafficPolicy: Local to the swiper-web-service solves the issue; both endpoints are now reachable. But the load-balancing of the other microservices across nodes is now disabled.
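For completeness, the workaround is a single extra field in the Service spec (the rest of the manifest stays as above):

spec:
  type: NodePort
  # Only route external traffic to pods on the node that received it;
  # this also preserves the client source IP, but disables cross-node balancing.
  externalTrafficPolicy: Local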

Bin4ry

2 Answers


The issue was quite simple. The application uses SignalR to fetch data on demand. Each data request could end up on a different node, leading to a broken connection state (HTTP 504/502). The swiper-web-service was missing the sessionAffinity config. Adjusting the swiper-web-service to the following fixes the issue:

apiVersion: v1
kind: Service
metadata:
  name: swiper-web-service
  namespace: default
spec:
  type: NodePort
  selector:
    app: swiper
  ports:
  - name: swiper-web-app-example
    port: 2000
    nodePort: 30300

  - name: swiper-web-api-oauth
    port: 2010

  sessionAffinity: ClientIP
  externalTrafficPolicy: Cluster
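As a side note (not part of the original fix): the stickiness window of ClientIP affinity can be tuned via sessionAffinityConfig; 10800 seconds is the Kubernetes default.

  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # how long a client keeps hitting the same pod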
Bin4ry

No... there will be only one nodePort 30300 exposed on all k8s nodes for your service. How are you replicating your second pod? Are you setting replicas to 2 in the deployment, or some other way?

Once you set replicas to 2 in the deployment, it will provision another pod. Make sure that pod is running on a separate node and that not all pods end up on the same k8s node; you can verify this as shown below.
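A quick way to verify the pod distribution and the Service endpoints (using the labels and names from the manifests above):

# Show which node each replica landed on
kubectl get pods -l app=swiper -o wide

# Confirm the Service lists one endpoint per pod
kubectl describe svc swiper-web-service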

subudear
  • I set replicas to 2 within the deployment, so that kubectl get service -l provides the following information: `NodePort: swiper-web-app-example 30300/TCP Endpoints: 10.244.1.23:2000,10.244.2.21:2000` Each node has one pod with the app instance running. I updated the yaml to avoid further misunderstanding. – Bin4ry Mar 29 '21 at 15:42
  • Yes, but only if replicas is set to 1. If I set it to 2, I get an HTTP 504 error message. To be clear, if I set replicas to 1, of course only one node is reachable (1.1.1.1 or 2.2.2.2). – Bin4ry Mar 29 '21 at 16:03
  • Run this command to confirm that the service holds both endpoints: kubectl describe svc swiper-web-service – subudear Mar 29 '21 at 16:08
  • The output of the command `kubectl describe svc swiper-web-service` was my first answer to your post: `10.244.1.23:2000,10.244.2.21:2000` – Bin4ry Mar 29 '21 at 16:09
  • what is the cluster version? Have you seen or tried the suggestion from this thread - https://stackoverflow.com/questions/46667659/kubernetes-cannot-access-nodeport-from-other-machines – subudear Mar 29 '21 at 16:24
  • The version is 1.20.5. I did add some screenshots of the overall running nodes. The suggested port-forwarding didn't help in my case. – Bin4ry Mar 29 '21 at 16:28