
I have a Kubernetes cluster with two worker nodes. I have configured CoreDNS to forward any DNS request that matches the ".com" domain to a remote server.

.com:53 {
    forward . <remote machine IP>
}
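
For reference, with a default CoreDNS deployment this block sits in the coredns ConfigMap in kube-system, and CoreDNS has to be reloaded to pick up changes. A minimal sketch of applying and verifying it, assuming that default setup (pod-0.test.com is just an example hostname):

kubectl -n kube-system edit configmap coredns                 # add the .com:53 block to the Corefile
kubectl -n kube-system rollout restart deployment coredns     # restart CoreDNS so it reloads the Corefile
# verify the forwarder from inside the cluster (example hostname)
kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup pod-0.test.com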

Let's say pod-0 sits on worker-0 and pod-1 sits on worker-1.

When I uninstall and reinstall the pods, there is a chance they get assigned to different worker nodes.

Is there a possibility that CoreDNS will resolve the pod's hostname to the IP of the worker node it is currently running on?

It would be really helpful if someone has an approach to handle this issue. Thanks in advance!

anonymous user

2 Answers


There is a workaround for this issue: you can use node selectors to deploy your pods onto the same node every time. If you don't want to do it that way, and you are deploying via a pipeline, you can add a few steps to your pipeline to maintain the DNS entries. The flow goes as below:

Trigger CI/CD pipeline → pod gets deployed → run a kubectl command to find which node each pod is on → SSH into the remote machine (with sudo privileges if required) and change the required config files.

Use the command below to get the details of the pods running on a particular node:

kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node>
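
If you go the pipeline route, a rough sketch of that step could look like the following. This assumes the remote server keeps the pod records in a plain config file and runs unbound (as mentioned in one of the comments below); the pod name, remote host, and file path are placeholders, not values from the question:

#!/bin/sh
# Hypothetical pipeline step: find the node IP the pod landed on and
# push it to the remote DNS server. Names and paths are placeholders.
POD=pod-0
NAMESPACE=default
REMOTE=user@<remote machine IP>

# .status.hostIP is the IP of the worker node the pod was scheduled onto
NODE_IP=$(kubectl get pod "$POD" -n "$NAMESPACE" -o jsonpath='{.status.hostIP}')

# Rewrite the record for pod-0.test.com and restart the resolver
# (the file path and the unbound restart are assumptions, not from the question)
ssh "$REMOTE" "sudo sed -i 's/.* pod-0.test.com/$NODE_IP pod-0.test.com/' /etc/dns-records.conf && sudo systemctl restart unbound"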

  • Thanks for your reply. As a note, I am also looking for the pod to be scheduled on any available node, rather than waiting for the same node every time; i.e., the pods need not have node affinity. Functionally my pod will work fine except for this DNS resolution, so I don't see an exact (functional) need for the pod to be scheduled onto the same node. Is there any other possibility to achieve this without node affinity? – anonymous user Feb 13 '23 at 14:38

Have you tried using Node Affinity? You can schedule a given pod to the same node every time using node labels. Simply use the kubernetes.io/hostname label key to select the node, as below:

First Pod

apiVersion: v1
kind: Pod
metadata:
  name: pod-0
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - worker1 
  hostname: pod-0.test.com
  containers:
    ...
    ...

Second Pod

apiVersion: v1
kind: Pod
metadata:
  name: pod-1
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - worker2
  hostname: pod-1.test.com
  containers:
    ...
    ...
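
Note that worker1 and worker2 above are example node names; you can check the actual value of the kubernetes.io/hostname label on your nodes, and confirm where the pods ended up, with something like:

# show the kubernetes.io/hostname label value for each node
kubectl get nodes -L kubernetes.io/hostname

# show which node each pod was scheduled onto
kubectl get pods -o wide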
Amila Senadheera
  • Thanks for your reply. With the approach you mentioned, I will not have issues when I reinstall the pods (which I mentioned in the question). But DNS resolution of the pod hostname should happen once the pods get scheduled. How could I add the DNS entries (which, per your answer, will not change every time) on the remote machine once the pods get scheduled? My CoreDNS has a forwarder configuration where hostnames matching ".com" are forwarded to the remote machine for DNS resolution. As a note, I am looking for the pod to be scheduled on any available node rather than waiting for the same node – anonymous user Feb 13 '23 at 14:32
  • How is the traffic coming to your pod? Do you have a NodePort service? Are your node IPs public, and do they change over time? It's not clear to me – Amila Senadheera Feb 13 '23 at 14:46
  • Yes, I do have a NodePort service, and the traffic to my pod comes in via port 80, which one of the containers inside the pod binds/listens to. My nodes have both private and public IPs, and those IPs will not change. But, for example, pod-0 sits on wn-0 with IP 10.x.y.z and pod-1 sits on wn-1 with IP 10.a.b.c. The pod's IP will be the same as the worker node's IP. When I reinstall the pods without node affinity, pod-0 might sit on wn-1, and then pod-0's IP will differ. So I need to change the DNS entries on the remote machine again and restart unbound and CoreDNS. – anonymous user Feb 14 '23 at 20:06