We are using KEDA to autoscale our Azure DevOps agents in an AKS cluster. We chose a ScaledJob for scaling because a ScaledObject-backed Deployment showed unexpected behavior while executing pipelines: it was being scaled down even while pipelines were still running. The ScaledJob below resolved that behavior, however we are now facing the concerns listed after it.
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: azdevops-scaledjob
spec:
  jobTargetRef:
    template:
      spec:
        containers:
        - name: azdevops-agent-job
          image: vstsimage
          imagePullPolicy: Always
          env:
          - name: AZP_URL
            value: [MYAZPURL]
          - name: AZP_TOKEN
            value: [MYAZPTOKEN]
          - name: AZP_POOL
            value: [MYAZPPOOL]
          volumeMounts:
          - mountPath: /mnt
            name: storage
        volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: azure-pvc
        # Job pods must use Never or OnFailure
        restartPolicy: Never
  pollingInterval: 30
  successfulJobsHistoryLimit: 5
  failedJobsHistoryLimit: 5
  maxReplicaCount: 10
  scalingStrategy:
    strategy: "default"
  triggers:
  - type: azure-pipelines
    metadata:
      poolID: "xxx"
      organizationURLFromEnv: "AZP_URL"
      personalAccessTokenFromEnv: "AZP_TOKEN"
We are using an Azure DevOps agent pool that contains VM-based agents alongside these Docker-based agents. We have noticed that scale-up creates multiple replicas even though there are not many pipelines in the queue. How can we control this?
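For this first concern, one thing we are considering is making the trigger count only the queued jobs that are meant for these container agents, instead of everything in the shared pool. A rough sketch of the trigger fragment we have in mind, assuming a KEDA version whose azure-pipelines scaler supports the demands and targetPipelinesQueueLength settings (the demand name "docker" is just a placeholder, not our real capability):

  triggers:
  - type: azure-pipelines
    metadata:
      poolID: "xxx"
      organizationURLFromEnv: "AZP_URL"
      personalAccessTokenFromEnv: "AZP_TOKEN"
      # count only queued pipeline jobs whose demands match the container agents
      demands: "docker"
      # number of pending jobs a single running replica should account for
      targetPipelinesQueueLength: "1"

Would this be enough, or should we also switch scalingStrategy.strategy from "default" to "accurate" to keep the replica count closer to the real queue length?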
Scale-down of the created jobs does not happen even when no pipelines are executing.
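Our current assumption is that the Job can only complete when the agent process exits, so an agent that stays in listening mode after finishing a pipeline will never be cleaned up by the ScaledJob. A sketch of the change we are thinking about, assuming the entrypoint in vstsimage forwards container args through to the agent's run.sh (that forwarding is an assumption about our image):

        containers:
        - name: azdevops-agent-job
          image: vstsimage
          # run exactly one pipeline job and then exit, so the Kubernetes Job
          # completes and the ScaledJob can clean it up; relies on the image
          # entrypoint passing these args through to ./run.sh
          args:
          - "--once"

Is this run-once pattern the expected way to make ScaledJob-managed agents terminate, or is there a setting on the ScaledJob itself that we are missing?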
Deleting the scaled jobs from the cluster does not remove the corresponding agent entries from the Azure DevOps agent pool.
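For the stale entries, the only idea we have so far is to unregister the agent from the pool before the pod is removed, for example with a preStop hook. A sketch, assuming the agent is extracted under /azp/agent inside vstsimage (both the path and the working directory are guesses on our side):

        containers:
        - name: azdevops-agent-job
          image: vstsimage
          lifecycle:
            preStop:
              exec:
                # unregister this agent from the Azure DevOps pool before the
                # pod is terminated; the /azp/agent path is an assumption
                command:
                - /bin/bash
                - -c
                - cd /azp/agent && ./config.sh remove --unattended --auth pat --token "$AZP_TOKEN"

Is something like this the right approach, or should stale agents be cleaned up through the Azure DevOps REST API instead?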