We have the following setup in our OpenShift environment:
Application A in namespace B, which is exposed via a route (sketched below) and configured for load balancing with:
haproxy.router.openshift.io/balance: roundrobin
haproxy.router.openshift.io/disable_cookies: 'True'
Application C in namespace D (same cluster), which is called by application A via an (internal) service.
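For reference, the two annotations above sit on the Route object for application A, roughly like this (route name, service name and port are placeholders, not our real values):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: app-a                 # placeholder
  namespace: B
  annotations:
    haproxy.router.openshift.io/balance: roundrobin
    haproxy.router.openshift.io/disable_cookies: 'True'
spec:
  to:
    kind: Service
    name: app-a-service       # placeholder
  port:
    targetPort: 8080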
Currently we are seeing good load balancing for the pods of application A, i.e. the pods have roughly the same CPU/memory usage and requests per second.
However, application C, which is called via the service, shows (for example, with two pods) average requests per second in a proportion of about 1:2. The only information I could find in the documentation is that under normal circumstances Kubernetes should load-balance this kind of internal service call.
sessionAffinity: None
is also set on the service.
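A quick way to double-check that both pods of application C are actually registered behind the service would be (service name and namespace are the placeholders from the object below):

# list the pod IPs currently backing the service
oc get endpoints test-service -n namespace -o wide

# inspect the service itself (selector, ports, session affinity)
oc describe svc test-service -n namespace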
We are using OpenShift v3.11.161 (Kubernetes v1.11). Is there anything I'm not aware of?
EDIT: Service object:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"test","template":"test-template"},"name":"test-service","namespace":"namespace"},"spec":{"ports":[{"name":"8080-tcp","port":8080,"protocol":"TCP","targetPort":8080}],"selector":{"deploymentconfig":"test-dc"},"sessionAffinity":"None"}}
  creationTimestamp: 'timestamp'
  labels:
    app: test
    template: test-template
  name: test-service
  namespace: namespace
  resourceVersion: '73875211'
  selfLink: /test/test
  uid: uid
spec:
  clusterIP: x.y.z
  ports:
    - name: 8080-tcp
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    deploymentconfig: test
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
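Since this is Kubernetes v1.11, I assume kube-proxy distributes traffic for this ClusterIP service via iptables rules with random probabilities; a rough way to inspect that on a node would be (service name is the placeholder from above):

# on a cluster node: show the NAT rules kube-proxy generated for the service
iptables-save -t nat | grep test-service

# the corresponding KUBE-SVC-* chain should contain one rule per endpoint,
# selected with "-m statistic --mode random --probability ..."
iptables-save -t nat | grep KUBE-SVC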
Kind regards