I'm trying to configure a MongoDB deployment in the Kubernetes (k8s) world. My Mongo deployment file looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: panel-admin-mongo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: panel-admin-mongo
  template:
    metadata:
      labels:
        component: panel-admin-mongo
    spec:
      volumes:
        - name: panel-admin-mongo-storage
          persistentVolumeClaim:
            claimName: database-persistent-volume-claim
      containers:
        - name: panel-admin-mongo
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: panel-admin-mongo-storage
              mountPath: /data/db
Mongo service file:
apiVersion: v1
kind: Service
metadata:
  name: panel-admin-mongo-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: panel-admin-mongo
  ports:
    - port: 27017
      targetPort: 27017
And my persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
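For context, I haven't set an explicit storageClassName on the claim, so I'm assuming it gets bound by whatever default StorageClass the cluster provides. The way I'd check the binding (this is just how I'd verify it, not something from my manifests) is:

kubectl get pvc database-persistent-volume-claim
kubectl get pv

and the claim shows up as Bound.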
When I enter the MongoDB container using:
kubectl exec -it panel-admin-mongo-deployment-6dcfc5b8c7-mk8d5 sh
and save some user's email and password in a collection (e.g. users), everything works fine. But when I shut the pod (and the container inside it) down and boot it up again, the data is gone. Shouldn't the data be independent of the pod's life cycle? And if so, what am I missing?
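For reference, the kind of write I'm doing inside the container is roughly this (the database and field names are just examples; on newer mongo images the shell is mongosh rather than mongo):

mongo
use testdb
db.users.insertOne({ email: "user@example.com", password: "secret" })
db.users.find()

The find() returns the document right after the insert, so the write itself succeeds; it just doesn't survive the pod being recreated.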