Update 2022-03-22: I was able to isolate the problem to the Cluster Autoscaler and not enough pod "slots" being left on a node. Still no solution. For a detailed analysis see https://github.com/tektoncd/pipeline/issues/4699
I have an EKS cluster running with the aws-ebs controller. Now I want to use Tekton on this cluster. Tekton has an affinity assistant, which should schedule pods onto the same node if they share a workspace (i.e. a volumeClaim). Sadly, this does not seem to work for me: I randomly get errors from my nodes stating `didn't match pod affinity rules` and `didn't find available persistent volume to bind`, even though a volume exists. After debugging, I found that the persistentVolumes created are, from time to time, in a different availability zone and on another host than the pod that is spawned.
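For context, the shared workspace is declared via a volumeClaimTemplate on the PipelineRun, roughly like this (names and sizes are illustrative, not my exact setup):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: build-run          # illustrative name
spec:
  pipelineRef:
    name: build-pipeline   # illustrative pipeline
  workspaces:
    - name: shared-workspace
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce   # EBS volumes only support single-node access
          resources:
            requests:
              storage: 1Gi
```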
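As far as I understand, the zone placement is controlled by the StorageClass's `volumeBindingMode`: with the default `Immediate` mode, the EBS volume can be created in a zone before the pod is even scheduled. A minimal sketch of a topology-aware StorageClass, assuming the EBS CSI driver (`ebs.csi.aws.com`) is installed:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-wait                           # illustrative name
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer    # delay volume creation until a pod is
                                           # scheduled, so the volume lands in that pod's zone
parameters:
  type: gp3
```

Per the update above, though, the autoscaler problem can remain even with a setup like this.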
Does somebody know how to still use “automatic” aws-ebs provisioning with Tekton on EKS, or something similar that makes this work? My fallback would be to try S3 as storage, but I assume that may not be the best solution, as I have many small files from a git repository. Just provisioning a volume up front and then running pods only on that one node (see the sketch below) is not the solution I would opt for, even though it is better than nothing :)
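That fallback would look roughly like this: a statically provisioned PV pointing at a pre-created EBS volume, which pins every consuming pod to that volume's zone (the volume ID and zone below are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tekton-static-pv                    # illustrative name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""                      # bind only to PVCs that also set ""
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0123456789abcdef0     # placeholder: ID of the pre-created EBS volume
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.ebs.csi.aws.com/zone
              operator: In
              values:
                - eu-central-1a             # placeholder: zone the volume was created in
```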
Any help would be appreciated! If more information is needed, please add a comment and I will follow up.
Thanks a lot!