71

I am facing an error when deploying a Kubernetes deployment from CircleCI. Please find the configuration file below.

When running the kubectl CLI, we get an error that points to an incompatibility between kubectl and the EKS tooling of the aws-cli.

version: 2.1
orbs:
  aws-ecr: circleci/aws-ecr@6.3.0
  docker: circleci/docker@0.5.18
  rollbar: rollbar/deploy@1.0.1
  kubernetes: circleci/kubernetes@1.3.0
  deploy:
    version: 2.1
    orbs:
      aws-eks: circleci/aws-eks@1.0.0
      kubernetes: circleci/kubernetes@1.3.0
    executors:
      default:
        description: |
          The version of the circleci/buildpack-deps Docker container to use
          when running commands.
        parameters:
          buildpack-tag:
            type: string
            default: buster
        docker:
          - image: circleci/buildpack-deps:<<parameters.buildpack-tag>>
    description: |
      A collection of tools to deploy changes to AWS EKS in a declarative
      manner where all changes to templates are checked into version control
      before applying them to an EKS cluster.
    commands:
      setup:
        description: |
          Install the gettext-base package into the executor to be able to run
          envsubst for replacing values in template files.
          This command is a prerequisite for all other commands and should not
          have to be run manually.
        parameters:
          cluster-name:
            default: ''
            description: Name of the EKS Cluster.
            type: string
          aws-region:
            default: 'eu-central-1'
            description: Region where the EKS Cluster is located.
            type: string
          git-user-email:
            default: "deploy@mail.com"
            description: Email of the git user to use for making commits
            type: string
          git-user-name:
            default: "CircleCI Deploy Orb"
            description:  Name of the git user to use for making commits
            type: string
        steps:
          - run:
              name: install gettext-base
              command: |
                if which envsubst > /dev/null; then
                  echo "envsubst is already installed"
                  exit 0
                fi
                sudo apt-get update
                sudo apt-get install -y gettext-base
          - run:
              name: Setup GitHub access
              command: |
                mkdir -p ~/.ssh
                echo 'github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==' >> ~/.ssh/known_hosts
                git config --global user.email "<< parameters.git-user-email >>"
                git config --global user.name "<< parameters.git-user-name >>"
          - aws-eks/update-kubeconfig-with-authenticator:
              aws-region: << parameters.aws-region >>
              cluster-name: << parameters.cluster-name >>
              install-kubectl: true
              authenticator-release-tag: v0.5.1
      update-image:
        description: |
          Generates template files with the specified version tag for the image
          to be updated and subsequently applies that template after checking it
          back into version control.
        parameters:
          cluster-name:
            default: ''
            description: Name of the EKS Cluster.
            type: string
          aws-region:
            default: 'eu-central-1'
            description: Region where the EKS Cluster is located.
            type: string
          image-tag:
            default: ''
            description: |
              The tag of the image, defaults to the  value of `CIRCLE_SHA1`
              if not provided.
            type: string
          replicas:
            default: 3
            description: |
              The replica count for the deployment.
            type: integer
          environment:
            default: 'production'
            description: |
              The environment/stage where the template will be applied. Defaults
              to `production`.
            type: string
          template-file-path:
            default: ''
            description: |
              The path to the source template which contains the placeholders
              for the image-tag.
            type: string
          resource-name:
            default: ''
            description: |
              Resource name in the format TYPE/NAME e.g. deployment/nginx.
            type: string
          template-repository:
            default: ''
            description: |
              The fullpath to the repository where templates reside. Write
              access is required to commit generated templates.
            type: string
          template-folder:
            default: 'templates'
            description: |
              The name of the folder where the template-repository is cloned to.
            type: string
          placeholder-name:
            default: IMAGE_TAG
            description: |
              The name of the placeholder environment variable that is to be
              substituted with the image-tag parameter.
            type: string
          cluster-namespace:
            default: sayway
            description: |
              Namespace within the EKS Cluster.
            type: string
        steps:
          - setup:
              aws-region: << parameters.aws-region >>
              cluster-name: << parameters.cluster-name >>
              git-user-email: dev@sayway.com
              git-user-name: deploy
          - run:
              name: pull template repository
              command: |
                [ "$(ls -A << parameters.template-folder >>)" ] && \
                  cd << parameters.template-folder >> && git pull --force && cd ..
                [ "$(ls -A << parameters.template-folder >>)" ] || \
                  git clone << parameters.template-repository >> << parameters.template-folder >>
          - run:
              name: generate and commit template files
              command: |
                cd << parameters.template-folder >>
                IMAGE_TAG="<< parameters.image-tag >>"
                ./bin/generate.sh --file << parameters.template-file-path >> \
                  --stage << parameters.environment >> \
                  --commit-message "Update << parameters.template-file-path >> for << parameters.environment >> with tag ${IMAGE_TAG:-$CIRCLE_SHA1}" \
                  << parameters.placeholder-name >>="${IMAGE_TAG:-$CIRCLE_SHA1}" \
                  REPLICAS=<< parameters.replicas >>
          - kubernetes/create-or-update-resource:
              get-rollout-status: true
              namespace: << parameters.cluster-namespace >>
              resource-file-path: << parameters.template-folder >>/<< parameters.environment >>/<< parameters.template-file-path >>
              resource-name: << parameters.resource-name >>
jobs:
  test:
    working_directory: ~/say-way/core
    parallelism: 1
    shell: /bin/bash --login
    environment:
      CIRCLE_ARTIFACTS: /tmp/circleci-artifacts
      CIRCLE_TEST_REPORTS: /tmp/circleci-test-results
      KONFIG_CITUS__HOST: localhost
      KONFIG_CITUS__USER: postgres
      KONFIG_CITUS__DATABASE: sayway_test
      KONFIG_CITUS__PASSWORD: ""
      KONFIG_SPEC_REPORTER: true
    docker:
    - image: 567567013174.dkr.ecr.eu-central-1.amazonaws.com/core-ci:test-latest
      aws_auth:
        aws_access_key_id: $AWS_ACCESS_KEY_ID_STAGING
        aws_secret_access_key: $AWS_SECRET_ACCESS_KEY_STAGING
    - image: circleci/redis
    - image: rabbitmq:3.7.7
    - image: circleci/mongo:4.2
    - image: circleci/postgres:10.5-alpine
    steps:
    - checkout
    - run: mkdir -p $CIRCLE_ARTIFACTS $CIRCLE_TEST_REPORTS
    # This is based on your 1.0 configuration file or project settings
    - restore_cache:
        keys:
        - v1-dep-{{ checksum "Gemfile.lock" }}-
        # any recent Gemfile.lock
        - v1-dep-
    - run:
        name: install correct bundler version
        command: |
          export BUNDLER_VERSION="$(grep -A1 'BUNDLED WITH' Gemfile.lock | tail -n1 | tr -d ' ')"
          echo "export BUNDLER_VERSION=$BUNDLER_VERSION" >> $BASH_ENV
          gem install bundler --version $BUNDLER_VERSION
    - run: 'bundle check --path=vendor/bundle || bundle install --path=vendor/bundle --jobs=4 --retry=3'
    - run:
        name: copy test.yml.sample to test.yml
        command: cp config/test.yml.sample config/test.yml
    - run:
        name: Precompile and clean assets
        command: bundle exec rake assets:precompile assets:clean
    # Save dependency cache
    - save_cache:
        key: v1-dep-{{ checksum "Gemfile.lock" }}-{{ epoch }}
        paths:
        - vendor/bundle
        - public/assets
    - run:
        name: Audit bundle for known security vulnerabilities
        command: bundle exec bundle-audit check --update
    - run:
        name: Setup Database
        command: bundle exec ruby ~/sayway/setup_test_db.rb
    - run:
        name: Migrate Database
        command: bundle exec rake db:citus:migrate
    - run:
        name: Run tests
        command: bundle exec rails test -f
    # By default, running "rails test" won't run system tests.
    - run:
        name: Run system tests
        command: bundle exec rails test:system
    # Save test results
    - store_test_results:
        path: /tmp/circleci-test-results
    # Save artifacts
    - store_artifacts:
        path: /tmp/circleci-artifacts
    - store_artifacts:
        path: /tmp/circleci-test-results
  build-and-push-image:
    working_directory: ~/say-way/
    parallelism: 1
    shell: /bin/bash --login
    executor: aws-ecr/default
    steps:
      - checkout
      - run:
          name: Pull latest core images for cache
          command: |
            $(aws ecr get-login --no-include-email --region $AWS_REGION)
            docker pull "${AWS_ECR_ACCOUNT_URL}/core:latest"
      - docker/build:
          image: core
          registry: "${AWS_ECR_ACCOUNT_URL}"
          tag: "latest,${CIRCLE_SHA1}"
          cache_from: "${AWS_ECR_ACCOUNT_URL}/core:latest"
      - aws-ecr/push-image:
          repo: core
          tag: "latest,${CIRCLE_SHA1}"
  deploy-production:
    working_directory: ~/say-way/
    parallelism: 1
    shell: /bin/bash --login
    executor: deploy/default
    steps:
      - kubernetes/install-kubectl:
          kubectl-version: v1.22.0
      - rollbar/notify_deploy_started:
          environment: report
      - deploy/update-image:
          resource-name: deployment/core-web
          template-file-path: core-web-pod.yml
          cluster-name: report
          environment: report
          template-repository: git@github.com:say-way/sw-k8s.git
          replicas: 3
      - deploy/update-image:
          resource-name: deployment/core-worker
          template-file-path: core-worker-pod.yml
          cluster-name: report
          environment: report
          template-repository: git@github.com:say-way/sw-k8s.git
          replicas: 4
      - deploy/update-image:
          resource-name: deployment/core-worker-batch
          template-file-path: core-worker-batch-pod.yml
          cluster-name: report
          environment: report
          template-repository: git@github.com:say-way/sw-k8s.git
          replicas: 1
      - rollbar/notify_deploy_finished:
          deploy_id: "${ROLLBAR_DEPLOY_ID}"
          status: succeeded
  deploy-demo:
    working_directory: ~/say-way/
    parallelism: 1
    shell: /bin/bash --login
    executor: deploy/default
    steps:
      - kubernetes/install-kubectl:
          kubectl-version: v1.22.0
      - rollbar/notify_deploy_started:
          environment: demo
      - deploy/update-image:
          resource-name: deployment/core-web
          template-file-path: core-web-pod.yml
          cluster-name: demo
          environment: demo
          template-repository: git@github.com:say-way/sw-k8s.git
          replicas: 2
      - deploy/update-image:
          resource-name: deployment/core-worker
          template-file-path: core-worker-pod.yml
          cluster-name: demo
          environment: demo
          template-repository: git@github.com:say-way/sw-k8s.git
          replicas: 1
      - deploy/update-image:
          resource-name: deployment/core-worker-batch
          template-file-path: core-worker-batch-pod.yml
          cluster-name: demo
          environment: demo
          template-repository: git@github.com:say-way/sw-k8s.git
          replicas: 1
      - rollbar/notify_deploy_finished:
          deploy_id: "${ROLLBAR_DEPLOY_ID}"
          status: succeeded
workflows:
  version: 2.1
  build-n-test:
    jobs:
      - test:
          filters:
            branches:
              ignore: master
  build-approve-deploy:
    jobs:
      - build-and-push-image:
          context: Core
          filters:
            branches:
              only: master
      - approve-report-deploy:
          type: approval
          requires:
            - build-and-push-image
      - approve-demo-deploy:
          type: approval
          requires:
            - build-and-push-image
      - deploy-production:
          context: Core
          requires:
            - approve-report-deploy
      - deploy-demo:
          context: Core
          requires:
            - approve-demo-deploy
Pav K.
yass
  • Hi yass welcome to SO. Please make use of the [extensive search feature](https://stackoverflow.com/search?q=%5Bkubernetes%5D+%22invalid+apiVersion%22+%22exec+plugin%22) to get the most benefit out of your stay in the stack exchange network. Good luck – mdaniel May 05 '22 at 14:47
  • If the same error is on local machine, check ~/.kube/ config with context authorisation or simply delete the whole dir if you want to start fresh. – Vladimir Vukanac Jun 02 '22 at 00:08

23 Answers

122

There is a known issue in the aws-cli (https://github.com/aws/aws-cli/issues/6920). It has already been fixed.


In my case, updating the aws-cli and regenerating ~/.kube/config helped.

  1. Update the aws-cli (following the documentation):

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --update

  2. Update the kube configuration:

mv ~/.kube/config ~/.kube/config.bk
aws eks update-kubeconfig --region ${AWS_REGION} --name ${EKS_CLUSTER_NAME}
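As a quick sanity check after both steps (a sketch, assuming the default kubeconfig path and otherwise valid cluster credentials):

aws --version                                        # should now report aws-cli/2.x
grep "client.authentication.k8s.io" ~/.kube/config   # should show .../v1beta1
kubectl get nodes                                    # confirms kubectl can authenticate again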
Pav K.
44

We HAVE a fix here: https://github.com/aws/aws-cli/issues/6920#issuecomment-1119926885

Update the aws-cli (aws cli v1) to the version with the fix:

pip3 install awscli --upgrade --user

For the AWS CLI v2, see this.
After that, don't forget to rewrite the kube config with:

aws eks update-kubeconfig --name ${EKS_CLUSTER_NAME} --region ${REGION}

This command should update the exec plugin apiVersion in the kubeconfig to v1beta1.
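For reference, after running update-kubeconfig the user entry in ~/.kube/config should contain an exec section roughly like the following (the region, account ID, and cluster name are placeholders, not values from this question):

users:
- name: arn:aws:eks:<region>:<account-id>:cluster/<cluster-name>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - --region
        - <region>
        - eks
        - get-token
        - --cluster-name
        - <cluster-name>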

bigLucas
    By the way that `update-kubeconfig` updates `~/.kube/config` so it's just **local** and doesn't affect the remote servers. – Dorian Jun 01 '22 at 07:34
  • The `aws eks...` line produces: "aws: error: argument --region: expected one argument" – sh37211 Jun 25 '22 at 20:43
    Perhaps your REGION variable is undefined in your environment, you can try specifying the `--region us-east-1` for instance. – bigLucas Jun 28 '22 at 17:43
27

In my case, changing apiVersion to v1beta1 in the kube configuration file helped:

apiVersion: client.authentication.k8s.io/v1beta1
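If you prefer not to edit the file by hand, a one-liner along these lines should do the same thing (a sketch; it keeps a .bak backup and assumes the default kubeconfig path):

sed -i.bak 's#client.authentication.k8s.io/v1alpha1#client.authentication.k8s.io/v1beta1#g' ~/.kube/config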
Peter Mortensen
Kiran Thorat
5

There is a glitch with the very latest version of kubectl. For now, you can follow these steps to get rid of the issue:

  1. curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl
  2. chmod +x ./kubectl
  3. sudo mv ./kubectl /usr/local/bin/kubectl
  4. sudo kubectl version
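An equivalent download from the current official endpoint, with a checksum verification step, might look like this (a sketch based on the Kubernetes install docs; v1.23.6 is just the version used above):

curl -LO "https://dl.k8s.io/release/v1.23.6/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/release/v1.23.6/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl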
Peter Mortensen
AKSHAY KADAM
5

The simplest solution (it appears here, but in more complicated words):

Open your kube config file and replace all alpha instances with beta. (Editors with find & replace are recommended: Atom, Sublime, etc.)

Example with Nano:

nano  ~/.kube/config

Or with Atom:

atom ~/.kube/config

Then search for the alpha instances, replace them with beta, and save the file.
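To locate the lines that need changing before (or instead of) opening an editor, something like this should do, assuming the default kubeconfig path:

grep -n "client.authentication.k8s.io/v1alpha1" ~/.kube/config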

Yoni
3

There is a problem with the latest kubectl and the aws-cli: https://github.com/aws/aws-cli/issues/6920

Rob Cannon
    Can you summarise in your answer? Incl. version information, dates, subsequent developments, etc. (But ***without*** "Edit:", "Update:", or similar - the answer should appear as if it was written today.) – Peter Mortensen Jun 26 '22 at 20:30
3

An alternative is to update the AWS CLI. It worked for me.

The rest of the instructions are from the answer provided by bigLucas.

Update the aws-cli (aws cli v2) to the latest version:

winget install Amazon.AWSCLI

After that, don't forget to rewrite the kube-config with:

aws eks update-kubeconfig --name ${EKS_CLUSTER_NAME} --region ${REGION}

This command should update the kube apiVersion to v1beta1.

Peter Mortensen
Chris Harris
3

I changed the v1alpha1 value to v1beta1 in the configuration file, and it is working for me.

Peter Mortensen
  • What do you mean by *"under the configuration file"*? Preferably, please respond by [editing (changing) your answer](https://stackoverflow.com/posts/72815127/edit), not here in comments (***without*** "Edit:", "Update:", or similar - the answer should appear as if it was written today). – Peter Mortensen Jun 30 '22 at 13:44
3

I was facing the same issue. To solve it, please follow the steps below:

  1. Take a backup of the existing config file: mv ~/.kube/config ~/.kube/config.bk

  2. Run the command below:

aws eks update-kubeconfig --name ${EKS_CLUSTER_NAME} --region ${REGION}

  3. Then open the ~/.kube/config file in any text editor, update v1alpha1 to v1beta1, and try again.
Hi computer
2

Using kubectl 1.21.9 fixed it for me, with asdf:

asdf plugin-add kubectl https://github.com/asdf-community/asdf-kubectl.git
asdf install kubectl 1.21.9

And I would recommend having a .tool-versions file with:

kubectl 1.21.9
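A minimal sketch of that setup, using standard asdf commands (the plugin must already be added as shown above):

echo "kubectl 1.21.9" > .tool-versions
asdf install              # installs whatever versions are listed in .tool-versions
kubectl version --client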
Dorian
2
  1. Open ~/.kube/config
  2. Search for the user within the cluster you have a problem with and replace the client.authentication.k8s.io/v1alpha1 with client.authentication.k8s.io/v1beta1
Peter Kracik
1

Try updating your awscli (AWS Command Line Interface) version.

For Mac, it's brew upgrade awscli (Homebrew).

Peter Mortensen
anna
1

Try upgrading the AWS Command Line Interface:

Steps

  1. curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
  2. sudo installer -pkg ./AWSCLIV2.pkg -target /

You can use other ways from the AWS documentation: Installing or updating the latest version of the AWS CLI

Peter Mortensen
1

I got the same problem. The EKS version is 1.22; kubectl works, and its version is v1.22.15-eks-fb459a0; the Helm version is 3.9+. When I execute helm ls -n $namespace I get the error

Error: Kubernetes cluster unreachable: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

From here, it is a Helm version issue, so I used the command

curl -L https://git.io/get_helm.sh | bash -s -- --version v3.8.2

to downgrade the Helm version. Helm works now.
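A quick check afterwards (a sketch; $namespace is the same placeholder used above):

helm version --short    # should report v3.8.2
helm ls -n $namespace   # should list releases without the apiVersion error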

Z.Liu
0

In the case of Windows, first delete the configuration file in the $HOME/.kube folder.

Then run the aws eks update-kubeconfig --name command as suggested by bigLucas.

Peter Mortensen
vahbuna
0

I just simplified the workaround by updating awscli to awscli v2, but that also requires Python and pip to be upgraded; the minimum requirement is Python 3.6 and pip3.

apt install python3-pip -y && pip3 install awscli --upgrade --user

And then update the cluster configuration with awscli

aws eks update-kubeconfig --region <regionname> --name <ClusterName>

Output

Added new context arn:aws:eks:us-east-1:XXXXXXXXXXX:cluster/mycluster to /home/dev/.kube/config

Then check the connectivity with the cluster:

dev@ip-10-100-100-6:~$ kubectl get node
NAME                             STATUS   ROLES    AGE    VERSION
ip-X-XX-XX-XXX.ec2.internal   Ready    <none>   148m   v1.21.5-eks-9017834
Peter Mortensen
Mansur Ul Hasan
0

I was able to fix this on a MacBook Pro (M1 chip) by running (Homebrew):

brew upgrade awscli
Peter Mortensen
0

Fixed for me. The only change needed is in the kubeconfig: v1alpha1 to v1beta1.

0

You can run the below command on your host machine where kubectl and aws-cli exist:

export KUBERNETES_EXEC_INFO='{"apiVersion":"client.authentication.k8s.io/v1beta1"}'

If you are using ‘sudo’ when running kubectl commands, then export this as the root user.
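Alternatively, instead of exporting the variable as the root user, you can ask sudo to pass your environment through (a sketch; whether -E is honoured depends on the sudoers configuration):

export KUBERNETES_EXEC_INFO='{"apiVersion":"client.authentication.k8s.io/v1beta1"}'
sudo -E kubectl get nodes    # -E preserves the exported variable for the root invocation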

KungFuPanda
0
# Install pip3, then upgrade the AWS CLI (newer releases fix the exec-plugin apiVersion issue)
apt install python3-pip -y
pip3 install awscli --upgrade --user
Tyler2P
    Remember that Stack Overflow isn't just intended to solve the immediate problem, but also to help future readers find solutions to similar problems, which requires understanding the underlying code. This is especially important for members of our community who are beginners, and not familiar with the syntax. Given that, **can you [edit] your answer to include an explanation of what you're doing** and why you believe it is the best approach? – Tyler2P Nov 06 '22 at 19:18
0

Try a different version of kubectl. If the Kubernetes version is 1.23, then you can use a nearby kubectl version: 1.22, 1.23, or 1.24.

0

For me, upgrading aws-iam-authenticator from v0.5.5 to v0.5.9 solved the issue.
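To check which version is installed locally:

aws-iam-authenticator version

If the authenticator is installed through the aws-eks orb, as in the question's configuration, the equivalent change would presumably be bumping the release tag parameter (a sketch; v0.5.9 mirrors this answer, not a value from the question):

- aws-eks/update-kubeconfig-with-authenticator:
    aws-region: << parameters.aws-region >>
    cluster-name: << parameters.cluster-name >>
    install-kubectl: true
    authenticator-release-tag: v0.5.9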

AAber
0

This is the only change required:

v1alpha1 to v1beta1

Update this in ~/.kube/config.

Kumar Pankaj Dubey