When I enable egress-only network policies, all readiness and liveness checks fail once the pods are restarted.
This is what I see when describing the pod:
Warning Unhealthy 115s (x7 over 2m55s) kubelet, Readiness probe failed: Get http://10.202.158.105:80/health/ready: dial tcp 10.202.158.105:80: connect: connection refused
Warning Unhealthy 115s (x7 over 2m55s) kubelet, Liveness probe failed: Get http://10.202.158.105:80/health/live: dial tcp 10.202.158.105:80: connect: connection refused
If I disable the policies, the health checks immediately start working again. And if a pod was already healthy before the network policies were applied, it continues to work.
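For reference, the egress policies I'm enabling are shaped like this default-deny example (a minimal sketch, not my exact policy; my real rules additionally allow specific destinations):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress    # illustrative name
spec:
  podSelector: {}              # selects all pods in the namespace
  policyTypes:
  - Egress                     # egress-only; no egress rules, so all outbound traffic is denied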
I've also tried to whitelist every namespace with this policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector: {}
    ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 8080
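I apply it with kubectl (the namespace name here is just a placeholder for my app's namespace):

kubectl apply -f allow-ingress-all.yaml -n my-app

My understanding is that an empty namespaceSelector matches pods in every namespace, so I expected this to cover wherever the probe traffic originates, but the probes still fail.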
I'm having a hard time finding any guidance on how to resolve this. Is there an egress policy that would need to be enabled to allow the kubelet to run the pods' health checks (see the sketch below for the kind of thing I mean)?
The pods are running in Azure Kubernetes Service (AKS) with Calico networking.
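For example, would something along these lines be the right direction? This is purely a guess: the CIDR is a placeholder for the AKS node subnet, and I don't know whether kubelet probe traffic can even be matched this way:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-nodes  # illustrative name
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.240.0.0/16    # placeholder; not my actual node subnet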