18

I have a PersistentVolume that was dynamically provisioned through a PersistentVolumeClaim.

I would like to keep the PV after the pod is done, i.e. the behavior that persistentVolumeReclaimPolicy: Retain provides.

However, that setting applies to a PersistentVolume, not to a PersistentVolumeClaim (AFAIK).

How can I change this behavior for dynamically provisioned PVs?

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
    name: {{ .Release.Name }}-pvc
spec:
    accessModes:
      - ReadWriteOnce
    storageClassName: gp2
    resources:
        requests:
            storage: 6Gi

---
kind: Pod
apiVersion: v1
metadata:
    name: "{{ .Release.Name }}-gatling-test"
spec:
    restartPolicy: Never
    containers:
      - name: {{ .Release.Name }}-gatling-test
        image: ".../services-api-mvn-builder:latest"
        command: ["sh", "-c", 'mvn -B gatling:test -pl csa-testing -DCSA_SERVER={{ template "project.fullname" . }} -DCSA_PORT={{ .Values.service.appPort }}']
        volumeMounts:
          - name: "{{ .Release.Name }}-test-res"
            mountPath: "/tmp/testResults"

    volumes:
      - name: "{{ .Release.Name }}-test-res"
        persistentVolumeClaim:
          claimName: "{{ .Release.Name }}-pvc"
          #persistentVolumeReclaimPolicy: Retain  ???
PrasadK
Ondra Žižka

4 Answers

9

This is not the answer to the OP, but the answer to the personal itch that led me here is that I don't need reclaimPolicy: Retain at all. I need a StatefulSet instead. Read on if this is for you:

My itch was to have a PersistentVolume that gets re-used over and over by the container, the way volumes behave by default with docker and docker-compose, so that a new PersistentVolume only gets created once:

# Create a new PersistentVolume the very first time
kubectl apply  -f my.yaml 

# This leaves the "volume" - the PersistentVolume - alone
kubectl delete -f my.yaml

# Second and subsequent times re-use the same PersistentVolume
kubectl apply  -f my.yaml 

And I thought the way to do that was to declare a PersistentVolumeClaim with reclaimPolicy: Retain and then reference that in my deployment. But even when I got reclaimPolicy: Retain working, a brand new PersistentVolume still got created on every kubectl apply. reclaimPolicy: Retain just ensured that the old ones didn't get deleted.

But no. The way to achieve this use case is with a StatefulSet. It is way simpler, and it behaves the way I'm used to with docker and docker-compose.
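A minimal sketch of what that looks like (the names and image below are placeholders, and the gp2 storage class is borrowed from the question); the volumeClaimTemplates entry makes the StatefulSet create the PVC once and re-attach it on every re-apply:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app          # headless Service the StatefulSet is associated with
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-image:latest    # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp2
        resources:
          requests:
            storage: 6Gi

By default, deleting the StatefulSet leaves the PVC created from the template (data-my-app-0) and its PV in place, so a subsequent kubectl apply re-uses the same volume.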

Peter V. Mørch
  • Neither `Retain` policy nor `StatefulSets` are required to re-use data between `Pod` runs (both are good practice though, e.g. for databases). it's enough to split your stateful app's config into *two YAML files*: single-use for PVC provisioning, and multi-use for provisioning other - more variable - resources such as `Deployments`. – mirekphd Jul 03 '23 at 10:12
8

A workaround would be to create a new StorageClass with reclaimPolicy: Retain and use that StorageClass everywhere.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2-retain
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
reclaimPolicy: Retain
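A PVC then simply references the new class by name, for example (adapted from the PVC in the question; the name here is illustrative):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2-retain   # the StorageClass defined above
  resources:
    requests:
      storage: 6Gi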

PS: The reclaimPolicy of an existing StorageClass can't be edited, but you can delete the StorageClass and recreate it with reclaimPolicy: Retain

Tummala Dhanvi
  • The problem with this, is that I now went from something portable (`storageClassName: default`) to introducing something AWS specific by hard-coding `provisioner: kubernetes.io/aws-ebs`. :-( – Peter V. Mørch Jan 03 '21 at 06:52
  • Correct. You can also change the default storage class to this one if required. – Tummala Dhanvi Jan 04 '21 at 07:10
  • Yeah, but I can't declare a `StorageClass` where the only thing I declare is `reclaimPolicy: Retain` so my .yaml files will be portable. – Peter V. Mørch Jan 04 '21 at 10:04
6

You can configure it in pv.yaml, in storageclass.yaml, or by patching an existing PV.

pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.17.0.2

storageclass.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2-retain
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
reclaimPolicy: Retain

Or patch an existing PV directly:

kubectl patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
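To confirm the patch took effect, one way is to read the field back with a JSONPath query (the PV name is a placeholder):

kubectl get pv <your-pv-name> -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'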

will
  • +1 for showing how to patch an already-provisioned volume. This is useful when you don't want to destroy a volume created with the wrong policy. – Robin Elvin Apr 14 '22 at 15:34
4

There is an issue on the Kubernetes GitHub about the reclaim policy of dynamically provisioned volumes.

The short answer is "no": you cannot set the policy.

Here is a short quote from the dialogue in the ticket on how to avoid PV deletion:

speedplane: Stumbled upon this and I'm going through a similar issue. I want to create an Elasticsearch cluster but make sure that if the cluster goes down for whatever reason, the data stored on the persistent disks is maintained across the restart. I currently have a PersistentVolumeClaim for each of the Elasticsearch deployments that carries data.

jsafrane: @speedplane: it is maintained as long as you don't delete the PVC. Reclaim policy is executed only if Kubernetes sees a PV that was bound to a PVC and the PVC does not exist.

speedplane: @jsafrane okay, got it. So I just have to be careful with the PVCs; deleting one is like deleting all the data on the disk.
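Since the reclaim policy only kicks in when the PVC is deleted, one option in a Helm setup like the question's (an assumption on my part, not something from the ticket) is to mark the PVC with Helm's resource-policy annotation so the claim survives deletion of the release:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ .Release.Name }}-pvc
  annotations:
    # Tell Helm to leave this resource in place when the release is deleted
    "helm.sh/resource-policy": keep
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: 6Gi

Whether that helps for resources created by helm test hooks depends on the hook delete policy in use, so treat it as a sketch rather than a guaranteed fix.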

Anton Kostenko
  • Unfortunately, in my case, this is within `helm test`, and Helm deletes all resources at the end. The project is iced but I'll try later.. – Ondra Žižka Nov 29 '18 at 18:07