
Do I still need to expose the pod via a ClusterIP service?

There are 3 pods - main, front, api. I need to allow ingress and egress traffic to the main pod only from the api and front pods. I also created service-main - a Service that exposes the main pod on port 80.

I don't know how to test it; I tried:

k exec main -it -- sh
nc -z -v -w 5 service-main 80

and

k exec main -it -- sh
curl front:80

The main.yaml pod:

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: main
    item: c18
  name: main
spec:
  containers:
  - image: busybox
    name: main
    command:
    - /bin/sh
    - -c
    - sleep 1d

The front.yaml:

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: front
  name: front
spec:
  containers:
  - image: busybox
    name: front
    command:
    - /bin/sh
    - -c
    - sleep 1d

The api.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: api
  name: api
spec:
  containers:
  - image: busybox
    name: api
    command:
    - /bin/sh
    - -c
    - sleep 1d  

The main-to-front-networkpolicy.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: front-end-policy
spec:
  podSelector:
    matchLabels:
      app: main
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: front
    ports:
    - port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
           app: front
    ports:
    - port: 8080

What am I doing wrong? Do I still need to expose the main pod via a service? But shouldn't the network policy take care of this already?

Also, do I need to set containerPort: 80 on the main pod? How can I test connectivity and ensure that ingress/egress works only between the main pod and the api and front pods?

I tried a lab from a CKAD prep course; it had 2 pods: secure-pod and web-pod. There was an issue with connectivity, and the solution was to create a network policy and test it using netcat from inside the web-pod's container:

k exec web-pod -it -- sh
nc -z -v -w 1 secure-service 80
connection open

UPDATE: ideally I want answers to these:

  • a clear explanation of the difference between a Service and a NetworkPolicy. If both a Service and a NetworkPolicy exist - in what order is the traffic/request evaluated? Does it first go through the network policy and then the service, or vice versa?

  • if I want the front and api pods to send traffic to and receive traffic from main - do I need separate services exposing the front and api pods?

Wytrzymały Wiktor
ERJAN
  • Which Kubernetes version are you using? – Mikolaj S. Jan 25 '22 at 18:52
  • @MikolajS. In the Katacoda scenario where I was running it, `kubectl version` reports client version 1.20 (linux/amd64) and server version 1.20. – ERJAN Jan 25 '22 at 22:01

1 Answer


Network policies and services are two different and independent Kubernetes resources.

Service is:

An abstract way to expose an application running on a set of Pods as a network service.

Good explanation from the Kubernetes docs:

Kubernetes Pods are created and destroyed to match the state of your cluster. Pods are nonpermanent resources. If you use a Deployment to run your app, it can create and destroy Pods dynamically.

Each Pod gets its own IP address, however in a Deployment, the set of Pods running in one moment in time could be different from the set of Pods running that application a moment later.

This leads to a problem: if some set of Pods (call them "backends") provides functionality to other Pods (call them "frontends") inside your cluster, how do the frontends find out and keep track of which IP address to connect to, so that the frontend can use the backend part of the workload?

Enter Services.

Also another good explanation in this answer.

For production you should use workload resources instead of creating pods directly:

Pods are generally not created directly and are created using workload resources. See Working with Pods for more information on how Pods are used with workload resources.
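As a sketch, the main pod from the question could be managed by a Deployment like this (hypothetical manifest; the labels and container spec are taken from the question's main.yaml):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: main
spec:
  replicas: 1
  selector:
    matchLabels:
      app: main
  template:
    metadata:
      labels:
        app: main
        item: c18
    spec:
      containers:
      - name: main
        image: busybox
        command: ["/bin/sh", "-c", "sleep 1d"]
```

If the pod dies, the Deployment replaces it automatically, which is exactly why Services (rather than pod IPs) are the stable way to reach it.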

And use services to make requests to your application.
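For reference, a Service like the service-main described in the question (assuming it selects the app: main label and forwards to the same port on the pod) might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-main
spec:
  selector:
    app: main       # matches the label on the main pod
  ports:
  - port: 80        # port the Service listens on
    targetPort: 80  # port on the pod the traffic is forwarded to
```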

Network policies are used to control traffic flow:

If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster.

Network policies target pods, not services (an abstraction). Check this answer and this one.

Regarding your examples - your network policy is correct (I tested it below). The problem may be that your cluster does not enforce network policies:

For Network Policies to take effect, your cluster needs to run a network plugin which also enforces them. Project Calico or Cilium are plugins that do so. This is not the default when creating a cluster!

I tested on a kubeadm cluster with the Calico plugin. I created pods similar to yours, but changed the container part:

spec:
  containers:
    - name: main
      image: nginx
      command: ["/bin/sh","-c"]
      args: ["sed -i 's/listen  .*/listen 8080;/g' /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"]
      ports:
      - containerPort: 8080

So the NGINX app is available on port 8080.

Let's check the pods' IPs:

user@shell:~$ kubectl get pods -o wide
NAME     READY   STATUS      RESTARTS        AGE   IP               NODE                                NOMINATED NODE   READINESS GATES
api      1/1     Running     0               48m   192.168.156.61   example-ubuntu-kubeadm-template-2   <none>           <none>
front    1/1     Running     0               48m   192.168.156.56   example-ubuntu-kubeadm-template-2   <none>           <none>
main     1/1     Running     0               48m   192.168.156.52   example-ubuntu-kubeadm-template-2   <none>           <none>

Let's exec into the running main pod and try to make a request to the api pod:

root@main:/# curl 192.168.156.61:8080
<!DOCTYPE html>
...
<title>Welcome to nginx!</title>

It is working.

After applying your network policy:

user@shell:~$ kubectl apply -f main-to-front.yaml 
networkpolicy.networking.k8s.io/front-end-policy created
user@shell:~$ kubectl exec -it main -- bash
root@main:/# curl 192.168.156.61:8080
...

It is not working anymore, which means the network policy is being enforced.

A nice option to get more information about an applied network policy is to run the kubectl describe command:

user@shell:~$ kubectl describe networkpolicy front-end-policy
Name:         front-end-policy
Namespace:    default
Created on:   2022-01-26 15:17:58 +0000 UTC
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     app=main
  Allowing ingress traffic:
    To Port: 8080/TCP
    From:
      PodSelector: app=front
  Allowing egress traffic:
    To Port: 8080/TCP
    To:
      PodSelector: app=front
  Policy Types: Ingress, Egress
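Since the question asks to allow traffic between main and both the front and api pods, the policy could simply list both pod selectors. A sketch based on the manifests above (the port is an assumption - adjust it to whatever your app actually listens on):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: main-policy
spec:
  podSelector:
    matchLabels:
      app: main
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    # multiple entries in "from" are OR-ed together
    - podSelector:
        matchLabels:
          app: front
    - podSelector:
        matchLabels:
          app: api
    ports:
    - port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: front
    - podSelector:
        matchLabels:
          app: api
    ports:
    - port: 8080
```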
Mikolaj S.
  • ok, if I enable a network policy for a pod but there is no service exposing the pod - will the pod still be unreachable? – ERJAN Jan 27 '22 at 05:39
  • if we don't add containerPort: 80 on the pod - does that make it unavailable if I try to curl into it? – ERJAN Jan 27 '22 at 10:49
  • Without a service, the pod won't be reachable from outside the cluster, but within the cluster it will be reachable via its pod IP - as I presented in my answer. – Mikolaj S. Jan 28 '22 at 13:12
  • `containerPort` is used for documentation purposes; it does not change anything - check [this answer](https://stackoverflow.com/questions/57197095/why-do-we-need-a-port-containerport-in-a-kuberntes-deployment-container-definiti). – Mikolaj S. Jan 28 '22 at 14:03