14

argoproj/argocd:v1.8.7

I have two Helm charts (one with an ingress, one with a deployment/service/configmap), each deployed as an Argo CD application with an automated sync policy (prune and self-heal). When I try to delete them from the Argo CD dashboard, the resources do get deleted (no longer on the k8s cluster), but the status on the dashboard is stuck at Deleting.


If I click Sync, it shows "Unable to deploy revision: application is deleting". Any ideas why the status is stuck at Deleting even though all resources have already been deleted? Is there a way to refresh the status in the dashboard so it reflects the actual state?

Thanks!

================

Update: After doing a cascade delete, the apps still show as Deleting in the dashboard (I've redacted the app names, which is why part of the screenshot is blank).

Running kubectl get all -A shows that none of the resources are present anymore (even the cm, svc, deploy, etc.).
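From what I understand, the Application object itself is what carries the Deleting status, and the hang is usually Argo CD's deletion finalizer (resources-finalizer.argocd.argoproj.io) never getting cleared. As a sketch, the finalizers on the stuck app can be inspected like this (`<app-name>` is a placeholder, and this assumes the app lives in the argocd namespace):

```
# Show any finalizers still set on the stuck Application object; an entry
# like resources-finalizer.argocd.argoproj.io blocks deletion until cleared.
kubectl -n argocd get application <app-name> -o jsonpath='{.metadata.finalizers}'
```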

lorraine batol
  • How did you verify that all the resources were properly deleted? Did you execute any ```kubectl get all``` commands inside the cluster? What was the output? – AkshayBadri May 21 '21 at 13:51
  • @B.Akshay, yes, I've verified using kubectl get all -A and they're no longer there (I've updated my post as well) – lorraine batol May 24 '21 at 08:59
  • Can you try this, @villager? There is an option to perform a Hard Refresh: click the drop-down next to Refresh and select ```Hard Refresh```. – AkshayBadri May 26 '21 at 15:02
  • I ran into something similar and found that the application.yaml was within a git repo directory of another application and would never delete. Moving it out of the repo solved it. Similar to some other comments here. – Colby Blair Apr 03 '23 at 17:03

2 Answers

19

I was actually able to make this work by updating the Application YAML:

  1. Add spec.syncPolicy.allowEmpty: true
  2. Remove metadata.finalizers (for an app that is already stuck, see the patch sketch after this list)
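If the app is already stuck at Deleting, the finalizers can also be stripped from the live object; a minimal sketch, assuming the app is named `<app-name>` (placeholder) and lives in the argocd namespace:

```
# Remove all finalizers from the stuck Application so Kubernetes
# can finish the deletion; <app-name> is a placeholder.
kubectl -n argocd patch application <app-name> \
  --type json -p '[{"op": "remove", "path": "/metadata/finalizers"}]'
```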

The working version, which no longer gets stuck at the Deleting status:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: service-name
  namespace: argocd
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  project: proj-name
  source:
    path: service-name
    repoURL: ssh://...git
    targetRevision: dev
    helm:
      valueFiles:
        - ../values.yaml
        - ../values_version_dev.yaml
  syncPolicy:
    automated:
      prune: true
      allowEmpty: true
      selfHeal: true
lorraine batol
1

This has happened to me several times. In every case it was because I had two Application declarations with the same name.
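A quick way to check for this, as a sketch (assumes the apps live in the argocd namespace and the manifests are in the current repo checkout):

```
# List Application objects so a duplicate name would stand out
kubectl -n argocd get applications.argoproj.io

# Find every manifest declaring an Application and compare the names
grep -rn --include='*.yaml' -A 3 'kind: Application' . | grep 'name:'
```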

Peter V. Mørch