
I have a node pool that has only one node. I tried to upsize it to two nodes, but I got the error: 1 node(s) had volume node affinity conflict. (I'm using AKS.)

I also tried to downsize it back to one node, but the error remained.
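
For reference, the equivalent Azure CLI commands for the scale up and down I attempted would be along these lines (resource group and cluster names are placeholders):

az aks nodepool scale --resource-group <resource-group> --cluster-name <cluster-name> --name agentpool --node-count 2
az aks nodepool scale --resource-group <resource-group> --cluster-name <cluster-name> --name agentpool --node-count 1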

I thoroughly read this article, where people have the same issue but for different causes: Kubernetes Pod Warning: 1 node(s) had volume node affinity conflict

I understand that this conflict might be because the PVC is scheduled in a different zone from the node.
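
To check whether that is the case, my plan is to compare the node affinity recorded on the PV against the node's zone labels, roughly like this (<pv-name> is a placeholder for the PV bound to the claim):

kubectl get pv <pv-name> -o jsonpath='{.spec.nodeAffinity}'
kubectl get node aks-agentpool-10306775-0 --show-labels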

I tried to investigate the issue, and I see that the failure-domain.beta.kubernetes.io label is not present there:

kubectl describe node aks-agentpool-10306775-0


Name:               aks-agentpool-10306775-0
Roles:              agent
Labels:             agentpool=agentpool
                    beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.azure.com/agentpool=agentpool
                    kubernetes.azure.com/cluster=[cluster name]
                    kubernetes.azure.com/mode=system
                    kubernetes.azure.com/role=agent
                    kubernetes.azure.com/storageprofile=managed
                    kubernetes.azure.com/storagetier=Premium_LRS
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=aks-agentpool-10306775-0
                    kubernetes.io/os=linux
                    kubernetes.io/role=agent
                    node-role.kubernetes.io/agent=
                    node.kubernetes.io/instance-type=Standard_DS2_v2
                    storageprofile=managed
                    storagetier=Premium_LRS
                    topology.disk.csi.azure.com/zone=
                    topology.kubernetes.io/region=uksouth
                    topology.kubernetes.io/zone=1

I'm not sure that it should be there, to be honest. Most of the answers in the referred article point to having different zones, but I only have one node, one zone, one PV, and one PVC.
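
For completeness, the zone labels on every node in the pool can be listed in one go, for example:

kubectl get nodes -L topology.kubernetes.io/zone -L failure-domain.beta.kubernetes.io/zone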

When I describe the PV, I can see the zone and failure-domain.beta.kubernetes.io:

kubectl get pv


NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS      REASON   AGE
pvc-4d65941a-3226-4078-a544-85dec3efe68a   128Gi      RWO            Delete           Bound    default/mssql-data   managed-premium            2y93d

This is the description:

kubectl describe pv pvc-aa4c6841-daee-4d36-bedc-f678704d73f8


Name:              pvc-aa4c6841-daee-4d36-bedc-f678704d73f8
Labels:            failure-domain.beta.kubernetes.io/region=uksouth
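
The describe output above is abbreviated; the PV's full node affinity can be pulled with something like:

kubectl get pv pvc-aa4c6841-daee-4d36-bedc-f678704d73f8 -o jsonpath='{.spec.nodeAffinity}'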

Would you please advise?

I'm trying to find the cause of this problem and how to solve it.

The storage class:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-disk
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: Managed
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mssql-data
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed
  resources:
    requests:
      storage: 8Gi
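
Note that kubectl get pv above reports the bound volume's storage class as managed-premium rather than the azure-disk class defined in this manifest, so the class that actually backs the PV can be inspected with:

kubectl get storageclass managed-premium -o yaml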

Please note that the issue started after I upgraded Kubernetes from 1.25.x to 1.26.3.
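
The node versions after the upgrade can be confirmed with:

kubectl get nodes -o wide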
