
I am running a GKE cluster version 1.17.13-gke.1400.

I have applied the following network policy in my cluster -

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
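For reference, the policy object itself can be confirmed to exist with a quick check (note: this only shows the object is stored, not whether any plugin enforces it):

```shell
# Describe the policy; confirms it exists in the default namespace
kubectl describe networkpolicy default-deny --namespace default
```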

This should block all communication to and from pods in the default namespace. However, it does not, as this test shows -

$ kubectl run p1 -it  --image google/cloud-sdk
root@p1:/# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=114 time=1.14 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=114 time=1.21 ms
^C
root@p1:/# curl www.google.com 
<!doctype html><html itemscope=" ...

From the docs, applying this should be pretty straightforward. Any help understanding what I'm doing wrong, or tips for further troubleshooting, would be appreciated.

Thanks, Nimrod

Nimrod Fiat
  • Which network plugin is your Kubernetes cluster using? That means, does your cluster have the enforcement of network policies enabled at all? See https://cloud.google.com/kubernetes-engine/docs/tutorials/network-policy#step_1_create_a_cluster – Andreas Jägle Nov 26 '20 at 08:00
  • If so, please check the namespace you are using for your run command. Maybe you are pointing to a different one than default where your network policy is deployed for. – Andreas Jägle Nov 26 '20 at 08:03
  • @AndreasJägle you're right. The issue is that the cluster (also) has windows nodes. And GKE doesn't support network policies with windows nodes. – Nimrod Fiat Nov 26 '20 at 08:49
  • @AndreasJägle as you were the founder of the underlying issue, provide your comment as an answer for better visibility. – Dawid Kruk Nov 26 '20 at 09:52
  • Thanks for the hint @DawidKruk. I added my questions to look at as an answer to this question, so that future seekers might find some inspiration without reading these comments. Glad this was already helpful! – Andreas Jägle Nov 26 '20 at 13:58

2 Answers


For Network Policies to take effect, your cluster needs to run a network plugin which also enforces them. Project Calico or Cilium are plugins that do so. This is not the default when creating a cluster!

So first, you should check that your cluster is set up accordingly, as described in the Google Cloud Network Policies docs. On GKE this is abstracted away behind the --enable-network-policy flag.
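If in doubt, the enforcement setting can be inspected and enabled with gcloud (a sketch; CLUSTER_NAME and ZONE are placeholders for your cluster):

```shell
# Check whether network policy enforcement is currently enabled
gcloud container clusters describe CLUSTER_NAME --zone ZONE \
  --format='value(networkPolicy.enabled)'

# Enabling it on an existing cluster is a two-step process:
# first the addon, then enforcement on the nodes (triggers a node pool recreation)
gcloud container clusters update CLUSTER_NAME --zone ZONE \
  --update-addons=NetworkPolicy=ENABLED
gcloud container clusters update CLUSTER_NAME --zone ZONE \
  --enable-network-policy
```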

If it is enabled, you should see some calico pods in the kube-system namespace.

kubectl get pods --namespace=kube-system

If there is a plugin in place which enforces network policies, you need to make sure to have deployed the network policy in the desired namespace - and check if your test using kubectl run is executed in that namespace, too. You might have some other namespace configured in your kube context and not hit the default namespace with your command.
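To rule that out, something like the following shows which namespace your current context points at and which policies are actually present in default (assumes a configured kube context):

```shell
# Show the namespace configured in the current kube context
# (empty output means the "default" namespace)
kubectl config view --minify --output 'jsonpath={..namespace}'

# List network policies deployed in the default namespace
kubectl get networkpolicy --namespace default
```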

Andreas Jägle

To install Calico using manifests

Apply the Calico manifests to your cluster. These manifests create a DaemonSet in the kube-system namespace.

kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/master/calico-operator.yaml

kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/master/calico-crs.yaml

View the resources in the kube-system namespace.

kubectl get daemonset calico-node --namespace kube-system

Output (the values in the DESIRED and READY columns should match; the values you see may differ from this example):

NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
calico-node   1         1         1       1            1           kubernetes.io/os=linux   26m
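As a follow-up check, you can wait for the rollout to finish instead of polling by hand (assumes the calico-node DaemonSet name from the output above):

```shell
# Block until all calico-node pods are updated and ready
kubectl rollout status daemonset/calico-node --namespace kube-system
```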

p K