
I am using the NetworkPolicy below to allow egress on the HTTP and HTTPS ports, but running wget https://google.com doesn't work while the network policy is applied. The domain name resolves (so the DNS egress rule works), but the connection to the external host times out.

I've tried this on minikube with Cilium and on Azure with azure-npm in case it was a quirk of a particular network policy controller, but it behaves the same on both. I'm confused because I use the same approach for DNS egress (which works), yet it fails for the other ports.

What's preventing egress on HTTP/HTTPS ports?

Kubernetes version 1.11.5

apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: my-netpolicy
spec:
  egress:
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
  - ports:
    - port: 443
      protocol: UDP
    - port: 443
      protocol: TCP
    - port: 80
      protocol: UDP
    - port: 80
      protocol: TCP
  podSelector:
    matchLabels:
      my-label: my-app

(Yes, the UDP rules are probably unnecessary, but I'm trying everything here.)

(I've also tried wget against a private server in case Google etc. block Azure IPs; same result.)

(I've also tried adding matching ingress rules because "why not"; same result.)


Output of kubectl describe on the network policy:

Name:         my-netpolicy
Namespace:    default
Created on:   2019-01-21 19:00:04 +0000 UTC
Labels:       ...
Annotations:  <none>
Spec:
  PodSelector:     ...
  Allowing ingress traffic:
    To Port: 8080/TCP
    From: <any> (traffic not restricted by source)
    ----------
    To Port: https/UDP
    To Port: https/TCP
    To Port: http/TCP
    To Port: http/UDP
    From: <any> (traffic not restricted by source)
  Allowing egress traffic:
    To Port: 53/UDP
    To Port: 53/TCP
    To: <any> (traffic not restricted by source)
    ----------
    To Port: https/UDP
    To Port: https/TCP
    To Port: http/UDP
    To Port: http/TCP
    To: <any> (traffic not restricted by source)
  Policy Types: Ingress, Egress

Minimal reproducible example:

apiVersion: v1
kind: Pod
metadata:
  name: netpolicy-poc-pod
  labels:
    name: netpolicy-poc-pod
spec:
  containers:
  - name: poc
    image: ubuntu:18.04
    command: ["bash", "-c", "while true; do sleep 1000; done"]
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: netpolicy-poc
spec:
  podSelector:
    matchLabels:
      name: netpolicy-poc-pod
  egress:
  - ports:
    - port: 80
      protocol: UDP
    - port: 80
      protocol: TCP
    - port: 443
      protocol: UDP
    - port: 443
      protocol: TCP
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
  ingress: []

Then:

kubectl exec -it netpolicy-poc-pod -- /bin/bash
apt update
apt install wget -y
wget https://google.com
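
For reference, a rough set of extra checks from inside the pod to separate DNS, IPv4, and IPv6 behaviour might look like this (the -4/-6 flags and the literal GitHub IPv4 address are only illustrative, not part of the original test):

# name resolution only (exercises just the port 53 egress rules)
getent ahosts google.com

# force IPv4 vs. IPv6 to see whether only one address family times out
wget -4 -O /dev/null https://google.com
wget -6 -O /dev/null https://google.com

# skip DNS entirely and hit a literal IPv4 address
wget -O /dev/null http://192.30.253.113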
  • Can you describe your current `NetworkPolicy` with `kubectl describe networkpolicy my-netpolicy`? – Nick_Kh Jan 31 '19 at 12:23
  • @mk_sta - I've edited the original post – Tyler Camp Jan 31 '19 at 15:00
  • It seems k8s does not recognize any `egress` rules for ports 80 and 443, as per your output – Nick_Kh Feb 01 '19 at 09:47
  • @mk_sta I've also tried egress rules for ports 389, 636, and 3269 for LDAPS, and those rules didn't work either. I'm not sure how to interpret these results or move forward here. – Tyler Camp Feb 01 '19 at 15:34
  • Can you check your current policy configuration with `kubectl get networkpolicy my-netpolicy -o yaml` to see whether it persists the appropriate port values? – Nick_Kh Feb 07 '19 at 09:59
  • @mk_sta I had previously also tried with manually entering the correct port values and it didn't have any effect. I've updated the post with a minimal reproducible YAML to use with `kubectl create -f`. – Tyler Camp Feb 07 '19 at 16:25
  • @mk_sta I believe I've found the issue - domain names were resolving to IPv6 addresses, which don't seem to be supported by the network policy controllers I've used. `wget http://github.com` gives me a timeout, but `wget http://192.30.253.113` connects successfully. Any thoughts/comments? – Tyler Camp Feb 07 '19 at 16:27
  • Nevermind - that may be relevant, but it's only the case with Cilium in minikube; requests via IPv4 on Azure k8s show the same failures – Tyler Camp Feb 07 '19 at 16:39
  • Do you have any other network policies implemented that could affect the Pods labeled `name: netpolicy-poc-pod`? – Nick_Kh Feb 13 '19 at 09:55
  • No, that's the only network policy affecting that pod (this is also the case when it's the only network policy in the cluster) – Tyler Camp Feb 13 '19 at 20:18
  • Then I would investigate DNS resolution in your cluster, as per this [tutorial](https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/) – Nick_Kh Feb 18 '19 at 09:31
  • DNS resolution works fine; connecting on non-DNS ports times out every time. Were you able to run my sample pod+netpolicy and reproduce the issue? – Tyler Camp Feb 20 '19 at 15:45

1 Answer


Turns out the policy I gave works fine; the problem is that the controllers implementing it had some bugs. On Minikube+Cilium it just didn't work for IPv6 but worked fine for IPv4, and on AKS the network policy feature is still generally in beta and there are other options we could try. I haven't found anything about my specific issue with the azure-npm implementation, but since the policy works fine in Minikube over IPv4, I'll assume it would also work fine in Azure once a "working" controller is set up.
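
For what it's worth, the same egress rule can also be written with an explicit IPv4 ipBlock, which at least makes the "IPv4 only" intent visible in the policy itself. This is only a sketch (the policy name is hypothetical) and doesn't work around the controller bugs described above:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: netpolicy-poc-ipv4   # hypothetical name for this sketch
spec:
  podSelector:
    matchLabels:
      name: netpolicy-poc-pod
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0      # all IPv4 destinations
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
    - port: 80
      protocol: TCP
    - port: 443
      protocol: TCP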


  • @Tyler - How did you test your network policy with IPv4? Which command (wget/curl/telnet/ping) did you use to confirm that your network policy works as expected for the HTTPS and HTTP ports? – solveit Jul 07 '21 at 03:16
  • I likely used wget to test the network policy – Tyler Camp Mar 29 '22 at 10:09