
We have the following setup in our OpenShift environment:

Application A in namespace B, which is exposed via a route with the following load-balancing configuration:

haproxy.router.openshift.io/balance: roundrobin
haproxy.router.openshift.io/disable_cookies: 'True'
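
For reference, these annotations sit on the Route object for application A roughly like this (a minimal sketch with placeholder names, not our exact manifest):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: app-a                  # placeholder name
  namespace: namespace-b       # placeholder namespace
  annotations:
    haproxy.router.openshift.io/balance: roundrobin
    haproxy.router.openshift.io/disable_cookies: 'True'
spec:
  to:
    kind: Service
    name: app-a-service        # placeholder service name
  port:
    targetPort: 8080-tcp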

Application C in namespace D (same cluster), which is called by application A via an (internal) service.

Currently we see good load balancing across the pods of application A: the pods have roughly the same CPU/memory usage and requests per second.

However, application C, which is called via the service, shows (for two pods, for example) average requests per second at a ratio of roughly 1:2. The only information I could find in the documentation is that, under normal circumstances, Kubernetes should load-balance these kinds of internal service calls.

sessionAffinity: None

is also set.

We are using OpenShift v3.11.161 (Kubernetes v1.11). Is there anything I'm not aware of?

EDIT: Service object:

apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"test","template":"test-template"},"name":"test-service","namespace":"namespace"},"spec":{"ports":[{"name":"8080-tcp","port":8080,"protocol":"TCP","targetPort":8080}],"selector":{"deploymentconfig":"test-dc"},"sessionAffinity":"None"}}
  creationTimestamp: 'timestamp'
  labels:
    app: test
    template: test-template
  name: test-service
  namespace: namespace
  resourceVersion: '73875211'
  selfLink: /test/test
  uid: uid
spec:
  clusterIP: x.y.z
  ports:
    - name: 8080-tcp
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    deploymentconfig: test
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Kind regards

Christian
    Try a _lowercase_ `true`: https://stackoverflow.com/a/54553331/6758654. If that doesn't help, try posting the whole Service and Route objects. – Will Gordon May 03 '20 at 17:04
  • I added the Service object. The route (to expose application A) is working fine regarding load balancing. We are having trouble with the service responsible for the calls from application A to application C. – Christian May 03 '20 at 17:22
  • What client library are you using? A common cause is that the HTTP connection is being kept alive by the library making the requests. For example, the standard Go library uses keep-alive quite aggressively, and the same goes for HTTP/2 connections. Can you also reproduce the same unfair load balancing with a curl loop from one of the pods? – Simon May 13 '20 at 17:04
  • It's a simple Spring Boot app using the standard RestTemplate. However, we switched to a route instead of the service, which allows us to configure round-robin and is working out fine for us (see the sketch below); the disadvantage, of course, is a small amount of network overhead. – Christian May 13 '20 at 20:21
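
A minimal sketch of that workaround, assuming application C is exposed via its own route and application A calls the route host instead of the cluster-internal service name (all names are placeholders, not our exact manifest):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: app-c-internal          # placeholder name
  namespace: namespace          # placeholder namespace
  annotations:
    haproxy.router.openshift.io/balance: roundrobin
    haproxy.router.openshift.io/disable_cookies: 'true'   # lowercase, as suggested in the comments
spec:
  to:
    kind: Service
    name: test-service          # the Service shown above
  port:
    targetPort: 8080-tcp

Application A then calls the host assigned to this route, so every request passes through the HAProxy router and is balanced round-robin across the pods of application C, at the cost of the extra network hop mentioned above.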

0 Answers