
An init container that runs `kubectl get pod` is used to check the ready status of another pod.

After an egress NetworkPolicy was enabled, the init container can no longer access the Kubernetes API: `Unable to connect to the server: dial tcp 10.96.0.1:443: i/o timeout`. The CNI is Calico.
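For context, a minimal sketch of such an init container; the image, target pod name, and readiness check are illustrative assumptions, not taken from the original setup, and the pod's service account needs RBAC permission to get pods:

  initContainers:
    - name: wait-for-other-pod
      # bitnami/kubectl is an assumed image; any image that ships kubectl works
      image: bitnami/kubectl:1.28
      command:
        - sh
        - -c
        # poll until the (hypothetical) pod "other-pod" reports Ready
        - |
          until kubectl get pod other-pod \
            -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' | grep -q True; do
            sleep 2
          done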

Several rules were tried, but none of them work (service and master host IPs, different CIDR masks):

...
  egress:
  - to:
    - ipBlock:
        cidr: 10.96.0.1/32
    ports:
    - protocol: TCP
      port: 443
...

or using a namespaceSelector (for both the default and kube-system namespaces):

...
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: default
    ports:
    - protocol: TCP
      port: 443
...

It looks like the ipBlock rules simply don't match, and the namespaceSelector rules don't work because the Kubernetes API server is not an ordinary pod.

Can this be made to work? Kubernetes is 1.9.5, Calico is 3.1.1.

The problem still exists with GKE 1.13.7-gke.8 and Calico 3.2.7.

Igor Stepin

5 Answers


You need to get the real IP of the master using `kubectl get endpoints --namespace default kubernetes` and create an egress policy that allows it:

---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1 
metadata:
  name: allow-apiserver
  namespace: test
spec:
  policyTypes:
  - Egress
  podSelector: {}
  egress:
  - ports:
    - port: 443
      protocol: TCP
    to:
    - ipBlock:
        cidr: x.x.x.x/32
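With `-o yaml` added, the command above returns an Endpoints object of roughly this shape (the address below is a placeholder, not a real value); the IP under subsets.addresses is what goes into the cidr as x.x.x.x/32:

apiVersion: v1
kind: Endpoints
metadata:
  name: kubernetes
  namespace: default
subsets:
  - addresses:
      - ip: 172.20.0.10     # placeholder; use your cluster's value in the policy above
    ports:
      - name: https
        port: 443
        protocol: TCP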
Dave McNeill
  • Is it possible for the master's IP to change? If so, this configuration may break when it does. – Puma Jul 09 '20 at 19:30
  • Make sure that you are also using the correct port. The 443 port used inside the pod may have been changed outside the pod to something like 4443. `get endpoints --namespace default kubernetes -o wide` lists the IP address + port. – Andrei Damian-Fekete Mar 02 '21 at 10:30
  • This works, but I'm a bit skeptical about whether it will break if the IP updates. In that case, for a less strict range you can use `cidr: 10.0.0.0/8` to allow access generally inside the cluster. – Pithikos Mar 01 '23 at 09:03

Had the same issue when using a CiliumNetworkPolicy with Helm. For anyone with a similar problem, something like this should work:

{{- $kubernetesEndpoint := lookup "v1" "Endpoints" "default" "kubernetes" -}}
{{- $kubernetesAddress := (first $kubernetesEndpoint.subsets).addresses -}}
{{- $kubernetesIP := (first $kubernetesAddress).ip -}}
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  ...
spec:
  ...
  egress:
    - toCIDRSet:
        - cidr: {{ $kubernetesIP }}/32
    ...
adelmoradian
  • `toEntities: [ kube-apiserver ]` – no need to look up the IP range if you can use the built-in functionality. Only works for Cilium. – AndiDog Jun 15 '23 at 16:04
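For reference, a minimal sketch of the `toEntities` approach mentioned in the comment above; it requires a Cilium version that supports the kube-apiserver entity, and the policy name and endpointSelector are placeholders:

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-to-apiserver
spec:
  endpointSelector: {}
  egress:
    - toEntities:
        # built-in Cilium entity for the API server, no IP lookup needed
        - kube-apiserver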

Update: Try Dave McNeill's answer first.

If it does not work for you (it did for me!), the following might be a workaround:

  podSelector:
    matchLabels:
      white: listed
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0

This will allow accessing the API server - along with all other IP addresses on the internet :-/

You can combine this with a rule that denies all non-whitelisted traffic in the namespace, so that egress is denied for all other pods; see the sketch below.
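A minimal sketch of such a default-deny egress policy (the name is a placeholder); pods that match an allow policy, like the `white: listed` one above, keep whatever egress that policy grants:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-egress
spec:
  # selects every pod in the namespace; with no egress rules listed, all egress is denied
  podSelector: {}
  policyTypes:
    - Egress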

schnatterer
  • VERY INSECURE. What's the point to even use network policies if you allow access to everything? – Pithikos Mar 01 '23 at 08:36
  • It was a workaround before the other answer came up, as described in the answer. The point was that all ingress and egress was allow-listed and specific pods were allowed egress. Still a lot better than not using netpols at all, don't you think? – schnatterer Mar 01 '23 at 09:08

We aren't on GCP, but the same should apply.

We query AWS for the CIDR of our master nodes and use that data as values for the Helm charts that create the NetworkPolicy for Kubernetes API access.

In our case the masters are part of an auto-scaling group, so we need the CIDR. In your case the IP might be enough.
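As a hedged sketch of that approach, the discovered CIDR could be passed in as a chart value and templated into the policy; the value name `apiServer.cidr`, the policy name, and the port are assumptions, not taken from the original setup:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-apiserver
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            # e.g. --set apiServer.cidr=10.1.0.0/24, as discovered from AWS
            cidr: {{ .Values.apiServer.cidr }}
      ports:
        - protocol: TCP
          port: 443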

Christian

You can allow egress traffic to the Kubernetes API endpoint IPs and ports.

You can get the endpoints by running `$ kubectl get endpoints kubernetes -o yaml`.

I don't understand why it doesn't work to simply allow traffic to the cluster IP of the kubernetes service in the default namespace (which is what the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT env vars contain), but in any case, allowing traffic to the underlying endpoints works.

To do this in a Helm chart template, you could do something like:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ...
spec:
  podSelector: ...
  policyTypes:
    - Egress
  egress:
    {{- range (lookup "v1" "Endpoints" "default" "kubernetes").subsets }}
    - to:
        {{- range .addresses }}
        - ipBlock:
            cidr: {{ .ip }}/32
        {{- end }}
      ports:
        {{- range .ports }}
        - protocol: {{ .protocol }}
          port: {{ .port }}
        {{- end }}
    {{- end }}
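Rendered against a cluster with, say, a single endpoint 172.20.0.10:443 (an illustrative value, not from the original answer), this would produce an egress block like:

  egress:
    - to:
        - ipBlock:
            cidr: 172.20.0.10/32
      ports:
        - protocol: TCP
          port: 443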
aude