
I'm experiencing an issue with my AGH (AdGuard Home) pod: it has to be reconfigured from scratch every time the container shuts down, whether that happens manually or because the server restarts.
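
For reference, "shuts down" means roughly the following (just a sketch; sometimes I simply reboot the node instead):

kubectl -n adguard delete pod -l app=adguard
# or:
kubectl -n adguard rollout restart deployment/adguard-deployment
# when the replacement pod comes up, AdGuard Home asks to be set up again on port 3000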

These are the various YAMLs:

Namespace

---
apiVersion: v1
kind: Namespace
metadata:
  name: adguard

PV

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: adguard-data-pv
  namespace: adguard
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/tank/apps/adguard/data"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: adguard-conf-pv
  namespace: adguard
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/tank/apps/adguard/conf"

PVC

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: adguard-data-pvc
  namespace: adguard
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: adguard-data-pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: adguard-conf-pvc
  namespace: adguard
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: adguard-conf-pv

ConfigMap

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: adguard-config
  namespace: adguard
data:
  AdGuardHome.yaml: |
    bind_host: 0.0.0.0
    bind_port: 3000
    auth_name: "admin"
    auth_pass: "[REDACTED]"
    language: "en"
    rlimit_nofile: 0
    rlimit_nproc: 0
    log_file: ""
    log_syslog: false
    log_syslog_srv: ""
    pid_file: ""
    verbose: false

Deployment

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: adguard-deployment
  namespace: adguard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: adguard
  template:
    metadata:
      labels:
        app: adguard
    spec:
      containers:
        - name: adguard-home
          image: adguard/adguardhome:latest
          resources:
            requests:
              memory: "128Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "1000m"
          env:
            - name: AGH_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: adguard-config
                  key: AdGuardHome.yaml
          ports:
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 53
              name: dns-udp
              protocol: UDP
            - containerPort: 67
              name: dhcp-one
              protocol: UDP
            - containerPort: 68
              name: dhcp-two
              protocol: TCP
            - containerPort: 68
              name: dhcp-three
              protocol: UDP
            - containerPort: 80
              name: http-tcp
              protocol: TCP
            - containerPort: 443
              name: doh-tcp
              protocol: TCP
            - containerPort: 443
              name: doh-udp
              protocol: UDP
            - containerPort: 3000
              name: http-initial
            - containerPort: 784
              name: doq-one
              protocol: UDP
            - containerPort: 853
              name: dot
              protocol: TCP
            - containerPort: 853
              name: doq-two
              protocol: UDP
            - containerPort: 5443
              name: dnscrypt-tcp
              protocol: TCP
            - containerPort: 5443
              name: dnscrypt-udp
              protocol: UDP
          volumeMounts:
            - name: adguard-data
              mountPath: /opt/adguardhome/work
            - name: adguard-conf
              mountPath: /opt/adguardhome/conf
      volumes:
        - name: adguard-data
          persistentVolumeClaim:
            claimName: adguard-data-pvc
        - name: adguard-conf
          persistentVolumeClaim:
            claimName: adguard-conf-pvc

Service

---
apiVersion: v1
kind: Service
metadata:
  name: adguard-service
  namespace: adguard
spec:
  selector:
    app: adguard
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
      name: http-initial
    - protocol: TCP
      port: 80
      targetPort: 80
      name: http-tcp
    - protocol: UDP
      port: 53
      targetPort: 53
      name: dns-udp
    - protocol: TCP
      port: 53
      targetPort: 53
      name: dns-tcp
    - protocol: UDP
      port: 67
      targetPort: 67
      name: dhcp-one
    - protocol: TCP
      port: 68
      targetPort: 68
      name: dhcp-two
    - protocol: UDP
      port: 68
      targetPort: 68
      name: dhcp-three
    - protocol: TCP
      port: 443
      targetPort: 443
      name: doh-tcp
    - protocol: UDP
      port: 443
      targetPort: 443
      name: doh-udp
    - protocol: UDP
      port: 784
      targetPort: 784
      name: doq-one
    - protocol: TCP
      port: 853
      targetPort: 853
      name: dot
    - protocol: UDP
      port: 853
      targetPort: 853
      name: doq-two
    - protocol: TCP
      port: 5443
      targetPort: 5443
      name: dnscrypt-tcp
    - protocol: UDP
      port: 5443
      targetPort: 5443
      name: dnscrypt-udp
  type: LoadBalancer
  externalTrafficPolicy: Local

I have to admit that I am new to Kubernetes, so maybe I am doing something wrong. I do, however, find it puzzling that Plex, which I deployed in a similar fashion, works just fine: I can stop it, destroy it, and re-deploy it, and it starts as if nothing ever happened.

I'm using MicroK8s with MetalLB, and the persistent data lives on ZFS.
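
In case it's relevant, this is roughly how I check whether anything actually persists across a restart (the paths are the hostPath locations from the PVs above; AdGuardHome.yaml is the file AdGuard Home writes its runtime configuration to):

kubectl -n adguard exec deploy/adguard-deployment -- ls -l /opt/adguardhome/conf /opt/adguardhome/work
ls -l /tank/apps/adguard/conf /tank/apps/adguard/data   # on the node itself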

