
So, I'm trying to install a chart with Helm 3 to a Kubernetes cluster (EKS). I have the Terraform configuration below. The actual cluster is active and visible.

variable "aws_access_key" {}
variable "aws_secret_key" {}

locals {
  cluster_name = "some-my-cluster"
}

provider "aws" {
  region = "eu-central-1"
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
}


data "aws_eks_cluster" "cluster" {
  name = local.cluster_name
}

data "aws_eks_cluster_auth" "cluster" {
  name = data.aws_eks_cluster.cluster.name
}

output "endpoint" {
  value = data.aws_eks_cluster.cluster.endpoint
}

output "kubeconfig-certificate-authority-data" {
  value = data.aws_eks_cluster.cluster.certificate_authority.0.data
}

output "identity-oidc-issuer" {
  value = "${data.aws_eks_cluster.cluster.identity.0.oidc.0.issuer}"
}

provider "kubernetes" {
  version                = "~>1.10.0"
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}

provider "helm" {
  version                = "~>1.0.0"
  debug = true
  alias = "my_helm"

  kubernetes {
    host = data.aws_eks_cluster.cluster.endpoint
    token = data.aws_eks_cluster_auth.cluster.token
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    load_config_file = false
  }
}

data "helm_repository" "stable" {
  name = "stable"
  url  = "https://kubernetes-charts.storage.googleapis.com"
}

resource "helm_release" "mydatabase" {
  provider  = helm.my_helm
  name  = "mydatabase"
  chart = "stable/mariadb"
  namespace = "default"

  set {
    name  = "mariadbUser"
    value = "foo"
  }

  set {
    name  = "mariadbPassword"
    value = "qux"
  }
}

When I run `terraform apply` I see an error: `Error: Kubernetes cluster unreachable`

Any thoughts? I would also appreciate some ideas on how to debug the issue - the debug option doesn't work.
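
One thing that might help narrow it down (a minimal sketch against the data sources above; the output name is made up) is to surface whether aws_eks_cluster_auth actually returns a token, without printing the token itself:

# Hypothetical debugging output: reports the length of the token returned by
# the aws_eks_cluster_auth data source rather than the token itself.
# A failure or a length of 0 here points at the auth data source, not the providers.
output "cluster-auth-token-length" {
  value = length(data.aws_eks_cluster_auth.cluster.token)
}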

I can confirm that it works with a newly created cluster.

kharandziuk
  • Did you find a solution? – vinni_f Mar 20 '20 at 17:49
  • I didn't. I just recreated the cluster and it works. You can try this configuration: https://github.com/kharandziuk/eks-getting-started-helm3 So it's probably an issue with policies or similar. I would recommend running helm itself or even trying to manually create the templates for k8s. It will give you a chance to debug the issue. – kharandziuk Mar 20 '20 at 18:51
  • 1
    Deleting terraform state S3 bucket on AWS solved the issue in my case. I had the similar issue and the same error message: https://stackoverflow.com/questions/66427129/terraform-error-kubernetes-cluster-unreachable-invalid-configuration/66427130#66427130 – Mykhailo Skliar Mar 01 '21 at 17:57
  • In fact, I was able to resolve this by doing a `terraform refresh`. I would guess an expired token is cached somewhere and isn't getting refreshed properly. – shaunc Nov 24 '21 at 23:07

1 Answer


The solution to this problem has to do with how the kubernetes provider authenticates. The only workaround that I could find that works is to replace the token attribute with the exec block I put below:

provider "kubernetes" {
    version                = "~>1.10.0"
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    exec {
        api_version = "client.authentication.k8s.io/v1alpha1"
        args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.cluster.name]
        command     = "aws"
     }
     load_config_file       = false
}
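
The question also configures a helm provider against the same cluster; presumably the same exec-based authentication can be applied to its nested kubernetes block. A minimal sketch under that assumption (not verified here):

provider "helm" {
  version = "~>1.0.0"
  alias   = "my_helm"

  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    load_config_file       = false

    # Same idea as above: let the AWS CLI fetch a fresh token on every run
    # instead of passing the short-lived token from aws_eks_cluster_auth.
    exec {
      api_version = "client.authentication.k8s.io/v1alpha1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.cluster.name]
    }
  }
}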
ctaglia