I am using kube-downscaler to scale pods down to 0, and I deployed the cluster autoscaler to decrease the number of nodes accordingly in EKS. Kube-downscaler is working well. We tested the cluster autoscaler against an ASG whose minimum and desired capacity were both 18 nodes. Before the test, we lowered the minimum capacity to 17, expecting the cluster autoscaler to scale the nodes down to 17. The nodes are quite underutilized, so if we set a minimum of 5, it should be able to go down even that far without kube-downscaler killing the pods.
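For reference, this is roughly how we changed the minimum capacity (a minimal sketch with boto3; the ASG name `my-eks-asg` and the region are placeholders, not our real values):

```python
# Minimal sketch: lower the ASG's MinSize so the cluster autoscaler is
# allowed to go below the previous floor. "my-eks-asg" is a placeholder.
import boto3

asg = boto3.client("autoscaling", region_name="eu-west-1")  # region is an assumption

asg.update_auto_scaling_group(
    AutoScalingGroupName="my-eks-asg",
    MinSize=17,  # was 18; note this does not touch DesiredCapacity by itself
)

# Verify the new limits
resp = asg.describe_auto_scaling_groups(AutoScalingGroupNames=["my-eks-asg"])
group = resp["AutoScalingGroups"][0]
print(group["MinSize"], group["DesiredCapacity"], group["MaxSize"])
```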
The number of nodes stayed at 18. These are the logs:
```
NodeGroups:
  Name:        XXXXXXXXXXXXX
  Health:      Healthy (ready=18 unready=0 notStarted=0 longNotStarted=0 registered=18 longUnregistered=0 cloudProviderTarget=18 (minSize=17, maxSize=21))
               LastProbeTime:      2023-03-29 12:21:20.679261135 +0000 UTC m=+175246.866442473
               LastTransitionTime: 2023-03-27 11:41:04.690130368 +0000 UTC m=+30.877311706
  ScaleUp:     NoActivity (ready=18 cloudProviderTarget=18)
               LastProbeTime:      2023-03-29 12:21:20.679261135 +0000 UTC m=+175246.866442473
               LastTransitionTime: 2023-03-27 11:41:04.690130368 +0000 UTC m=+30.877311706
  ScaleDown:   NoCandidates (candidates=0)
               LastProbeTime:      2023-03-29 12:21:20.679261135 +0000 UTC m=+175246.866442473
               LastTransitionTime: 2023-03-27 11:41:04.690130368 +0000 UTC m=+30.877311706
```
I saw that the desired capacity can be adjusted by autoscaling policies, but only CloudWatch was mentioned; I didn't find specific information about the cluster autoscaler. Does someone have experience with that?
https://stackoverflow.com/questions/36270873/aws-ec2-auto-scaling-groups-i-get-min-and-max-but-whats-desired-instances-lim
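To illustrate what I mean: outside of CloudWatch-driven scaling policies, the desired capacity can also be set directly, for example with boto3 (again a sketch, with a placeholder ASG name). What I can't find is whether the cluster autoscaler itself is supposed to make a call like this when MinSize drops and nodes are underutilized:

```python
# Sketch of setting the desired capacity directly ("my-eks-asg" is a placeholder).
# My question is whether the cluster autoscaler effectively does this on its own,
# or whether it only scales down by removing specific candidate nodes.
import boto3

asg = boto3.client("autoscaling")

asg.set_desired_capacity(
    AutoScalingGroupName="my-eks-asg",
    DesiredCapacity=17,
    HonorCooldown=False,  # apply immediately, ignoring the ASG cooldown
)
```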