I have been using the following snippet to manage Kubernetes autoscaling with Terraform:
resource "helm_release" "cluster-autoscaler" {
  depends_on = [
    module.eks
  ]

  name             = "cluster-autoscaler"
  namespace        = local.k8s_service_account_namespace
  repository       = "https://kubernetes.github.io/autoscaler"
  chart            = "cluster-autoscaler"
  version          = "9.10.7"
  create_namespace = false
}
While all of this had been working for months (via GitLab CI/CD), it has suddenly stopped working and now throws the following error:
module.review_vpc.helm_release.cluster-autoscaler: Refreshing state... [id=cluster-autoscaler]
╷
│ Error: Kubernetes cluster unreachable: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
│
│ with module.review_vpc.helm_release.cluster-autoscaler,
│ on ..\..\modules\aws\eks.tf line 319, in resource "helm_release" "cluster-autoscaler":
│ 319: resource "helm_release" "cluster-autoscaler" {
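For context, the helm provider reaches the cluster through an exec credential plugin, configured roughly along these lines (a simplified sketch; the module output names and attribute values shown are illustrative, not copied from my actual code):

```hcl
# Sketch of an exec-based helm provider configuration for EKS.
# Output names (cluster_endpoint, etc.) are illustrative assumptions.
provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

    exec {
      # Newer Kubernetes client libraries reject "client.authentication.k8s.io/v1alpha1"
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
    }
  }
}
```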
I am using AWS EKS with Kubernetes version 1.21.
The Terraform providers are as follows:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "1.14.0"
    }
  }
}
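Note that the helm provider itself is not explicitly pinned in my required_providers block. If it were, the entry would look something like this (the version constraint shown is an assumption, not my actual pin):

```hcl
terraform {
  required_providers {
    # Illustrative pin for the helm provider; the constraint is an assumption.
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.4"
    }
  }
}
```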
UPDATE 1
Here is the module for EKS:
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "17.24.0"
}