I am running a k3s cluster on Raspberry Pis with Ubuntu Server 20.04 and have a drive mounted on the master node to serve as shared NFS storage.

For some reason I can only access the server using the ClusterIP of the service, but not the service name. CoreDNS is running and its logs suggest it is working fine, but it does not seem to be used by the test pod that I am trying to mount the NFS share onto.

There is a similar question for the Raspbian case, but I am running Ubuntu, and I am not sure whether switching to legacy iptables as suggested there would work or break things further.
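
For reference, this is the legacy-iptables switch suggested there, as far as I understand it (standard Debian/Ubuntu update-alternatives; I have not tried it on my nodes):

# switch both iptables backends to legacy, then reboot
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo reboot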

Any help is appreciated! Details below.

NFS server manifests:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server-pod
  namespace: storage
  labels:
    nfs-role: server
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      nfs-role: server
  template:
    metadata:
      labels:
        nfs-role: server
    spec:
      nodeSelector:
        storage: enabled
      containers:
        - name: nfs-server-container
          image: ghcr.io/two-tauers/nfs-server:0.1.14
          securityContext:
            privileged: true
          args:
            - /storage
          volumeMounts:
            - name: storage-mount
              mountPath: /storage
      volumes:
        - name: storage-mount
          hostPath:
            path: /storage

---
kind: Service
apiVersion: v1
metadata:
  name: nfs-server
  namespace: storage
spec:
  selector:
    nfs-role: server
  type: ClusterIP
  ports:
    - name: tcp-2049
      port: 2049
      protocol: TCP
    - name: udp-111
      port: 111  # no protocol set, so Kubernetes defaults this to TCP (hence 111/TCP below)

The server pod logs look healthy and the service is up:

❯ kubectl logs pod/nfs-server-pod-ccd9c4877-lgd4x -n storage
 * Exporting directories for NFS kernel daemon...
   ...done.
 * Starting NFS kernel daemon
   ...done.
Setting up watches.
Watches established.
❯ kubectl get service/nfs-server -n storage
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)            AGE
nfs-server   ClusterIP   10.43.223.27   <none>        2049/TCP,111/TCP   102m
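
(A few standard checks to confirm the service actually fronts the pod; sketching the commands here rather than pasting all of the output:)

kubectl get pods -n storage -o wide            # is the server pod on the node labelled storage=enabled?
kubectl get endpoints nfs-server -n storage    # does the service list the pod IP as an endpoint?
kubectl exec -n storage deploy/nfs-server-pod -- exportfs -v   # assuming exportfs is on the image's PATH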

Test pod manifest; the pod never reaches a running state:

---
kind: Pod
apiVersion: v1
metadata:
  name: test-nfs
  namespace: storage
spec:
  volumes:
    - name: nfs-volume
      nfs:
        server: nfs-server # doesn't work
        # server: nfs-server.storage.svc.cluster.local # doesn't work
        # server: 10.43.223.27 # works
        path: /
  containers:
    - name: test
      image: alpine
      volumeMounts:
        - name: nfs-volume
          mountPath: /var/nfs
      command: ["/bin/sh"]
      args: ["-c", "while true; do date >> /var/nfs/dates.txt; sleep 5; done"]

Events (from kubectl describe):

Events:
  Type     Reason       Age                   From               Message
  ----     ------       ----                  ----               -------
  Normal   Scheduled    51m                   default-scheduler  Successfully assigned storage/test-nfs to sauron
  Warning  FailedMount  9m26s (x14 over 48m)  kubelet            MountVolume.SetUp failed for volume "nfs-volume" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs nfs-server:/ /var/lib/kubelet/pods/d1523d5f-60ec-4990-ae61-369a26b7a6f4/volumes/kubernetes.io~nfs/nfs-volume
Output: mount.nfs: Connection timed out
  Warning  FailedMount  8m12s (x15 over 49m)  kubelet  Unable to attach or mount volumes: unmounted volumes=[nfs-volume], unattached volumes=[nfs-volume kube-api-access-pc9bk]: timed out waiting for the condition
  Warning  FailedMount  3m44s (x5 over 46m)   kubelet  Unable to attach or mount volumes: unmounted volumes=[nfs-volume], unattached volumes=[kube-api-access-pc9bk nfs-volume]: timed out waiting for the condition
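
If I read the events right, the mount command is executed by the kubelet on the node itself, not inside a container, so the same mount can be reproduced by hand on the node to take Kubernetes out of the picture (/mnt/test is just an arbitrary empty directory):

# run on the node the test pod was scheduled to (sauron)
sudo mkdir -p /mnt/test
sudo mount -t nfs nfs-server:/ /mnt/test       # by service name, as the kubelet tried
sudo mount -t nfs 10.43.223.27:/ /mnt/test     # by ClusterIP, which is the variant that works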

At the same time, nslookup from a pod works and CoreDNS does produce a lookup log:

---
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: storage
spec:
  containers:
  - name: dnsutils
    image: k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.3
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always

❯ kubectl exec -i -t dnsutils -n storage -- nslookup nfs-server && kubectl logs pod/coredns-84c56f7bfb-66s84 -n kube-system | tail -1
Server:     10.43.0.10
Address:    10.43.0.10#53

Name:   nfs-server.storage.svc.cluster.local
Address: 10.43.223.27

[INFO] 10.42.1.77:37937 - 21822 "A IN nfs-server.storage.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000980753s

❯ kubectl exec -i -t dnsutils -n storage -- nslookup kubernetes.default.svc.cluster.local && kubectl logs pod/coredns-84c56f7bfb-66s84 -n kube-system | tail -1
Server:     10.43.0.10
Address:    10.43.0.10#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.43.0.1

[INFO] 10.42.1.77:59077 - 58999 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000496515s
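
Since all the in-pod lookups above succeed, the remaining comparison is name resolution on the node itself, where the kubelet runs (10.43.0.10 is the cluster DNS address shown above):

# run on the node, not in a pod
nslookup nfs-server.storage.svc.cluster.local                # via the node's own resolver
nslookup nfs-server.storage.svc.cluster.local 10.43.0.10     # querying CoreDNS directly
cat /etc/resolv.conf                                         # is 10.43.0.10 configured as a nameserver on the node?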