
I am trying to deploy my application into a Rancher-managed Kubernetes (RKE) cluster. I have created a pipeline in GitLab using Auto DevOps, but when the Helm chart tries to deploy I get this error: Error: Kubernetes cluster unreachable: Get "http://localhost:8080/version?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused

Below is my deploy script:

deploy:
  stage: deploy
  image: cdrx/rancher-gitlab-deploy
  only:
    - master
  script:
    - apk --no-cache add curl
    - curl -L https://get.helm.sh/helm-v3.3.0-rc.1-linux-amd64.tar.gz > helm.tar.gz
    - tar -zxvf helm.tar.gz
    - mv linux-amd64/helm /usr/local/bin/helm
    - helm install mychart ./mychart

Could someone help me resolve this issue?

merla
  • What is the API URL you gave in the GitLab Auto DevOps setup? – Tarun Khosla Jul 24 '20 at 04:56
  • Sounds like it doesn't know how to connect to your RKE cluster – Rico Jul 24 '20 at 05:55
  • Hello, have you managed to solve your issue with the help of Rico's comment? – Dawid Kruk Jul 27 '20 at 16:44
  • I already configured the integration with the RKE cluster by adding the API URL and token of the cluster in the GitLab project settings. Not sure what else has to be configured; still getting the same error – merla Jul 28 '20 at 14:07
  • The issue was fixed after explicitly adding the environment variable to the deploy script. Thanks for the help – merla Aug 28 '20 at 03:12
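For reference, the fix merla mentions (explicitly setting the kubeconfig in the deploy job) might look roughly like the sketch below. The variable name KUBE_CONFIG_PATH and the environment name are placeholders, since the exact change was not shared; GitLab's cluster integration only injects kube credentials into jobs that declare an environment:

```yaml
deploy:
  stage: deploy
  image: cdrx/rancher-gitlab-deploy
  # GitLab only exposes the cluster's kubeconfig to jobs with an environment
  environment: production
  only:
    - master
  script:
    - apk --no-cache add curl
    - curl -L https://get.helm.sh/helm-v3.3.0-rc.1-linux-amd64.tar.gz > helm.tar.gz
    - tar -zxvf helm.tar.gz
    - mv linux-amd64/helm /usr/local/bin/helm
    # Point helm at the cluster credentials instead of the default
    # localhost:8080 fallback (placeholder variable name)
    - export KUBECONFIG="$KUBE_CONFIG_PATH"
    - helm install mychart ./mychart
```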

10 Answers


I bumped into the same issue when installing Rancher on K3s; setting KUBECONFIG helped.

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
VasekCh
  • In my case changing `KUBECONFIG` was not enough and I had to uninstall and reinstall K3s – nedstark179 Jun 04 '21 at 12:12
  • Works with k3s on Debian 10. Great one! – Rob Ert Jun 18 '21 at 00:22
  • Similar to the answer, if a Helm chart is to be installed on k3s, the --kubeconfig parameter should be used with the helm command, specifying the location of the k3s configuration – atsag Oct 01 '21 at 14:44
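Following up on atsag's comment, the kubeconfig can also be passed per invocation instead of exported; a minimal sketch, assuming the default k3s config path:

```shell
# Either export it for the whole shell session...
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
helm list

# ...or pass it explicitly for a single helm call
helm list --kubeconfig /etc/rancher/k3s/k3s.yaml
```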

This answer solved the issue for me. If you're not running on microk8s, like me, omit the prefix:

[microk8s] kubectl config view --raw > ~/.kube/config
donbunkito
  • I get this error: -sh: 45: cannot create /home/smartdev/.kube/config: Directory nonexistent – Jeffrey Nyauke Apr 13 '21 at 17:01
  • Thanks, that works for Ubuntu 21.04 with microk8s. I hoped for a simple explanation... – moshe beeri May 14 '21 at 17:24
  • Thanks. k3s user here. For those wondering, this worked for me because I didn't have the file `~/.kube/config`; this answer creates it with the content of `kubectl config view --raw`. If you already have this file, it might be something else – RASG Feb 05 '22 at 14:00
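If, as in Jeffrey Nyauke's comment, the error is that the directory does not exist, create ~/.kube first; a small sketch:

```shell
# Ensure the target directory exists before redirecting into it
mkdir -p ~/.kube
# Drop the microk8s prefix if you are not on microk8s
microk8s kubectl config view --raw > ~/.kube/config
```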

If the following command doesn't work:

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

you can try to use the root user to install k3s & helm.

Flands

I just had the same issue. This happens because you are a non-root user, so switch to root:

sudo su

then run the export and all other commands:

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.7.1
WillX0r

Some good answers here specifying how to fix the problem. Here's a passage from the excellent O'Reilly book "Learning Helm" that gives insight into why this error is happening:

"Working with Kubernetes Clusters Helm interacts directly with the Kubernetes API server. For that reason, Helm needs to be able to connect to a Kubernetes cluster. Helm attempts to do this automatically by reading the same configuration files used by kubectl (the main Kubernetes command-line client).

Helm will try to find this information by reading the environment variable $KUBECONFIG. If that is not set, it will look in the same default locations that kubectl looks in (for example, $HOME/.kube/config on UNIX, Linux, and macOS).

You can also override these settings with environment variables (HELM_KUBECONTEXT) and command-line flags (--kube-context). You can see a list of environment variables and flags by running helm help. The Helm maintainers recommend using kubectl to manage your Kubernetes credentials and letting Helm merely autodetect these settings. If you have not yet installed kubectl, the best place to start is with the official Kubernetes installation documentation."

-Learning Helm by Matt Butcher, Matt Farina, and Josh Dolitsky (O’Reilly). Copyright 2021 Matt Butcher, Innovating Tomorrow, and Blood Orange, 978-1-492-08365-8.
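In shell terms, the lookup order described in the passage gives three ways to point Helm at a cluster; the context name below is a placeholder:

```shell
# 1. Let Helm read the same file kubectl uses (the default)
export KUBECONFIG=$HOME/.kube/config
helm list

# 2. Override per command with a flag
helm list --kubeconfig $HOME/.kube/config

# 3. Select a specific context from an existing kubeconfig
helm list --kube-context my-context
```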

Keith Lyons

I had a similar error. A bit of background context: I was working with multiple clusters, and by mistake I edited the .kube/config manually. This resulted in an invalid configuration, with the context.cluster and context.user parameters missing. I filled in those values manually and it worked again.

Before fixing, the config file had a portion like this:

contexts:
- context:
    cluster: ""
    user: ""
  name: ""

I updated it as

contexts:
- context:
    cluster: <NAME-OF-THE-CLUSTER>
    user: <USERNAME>
  name: <CONTEXT-NAME>

To update the values, I used the output of kubectl config get-contexts (I had the output of that command in my terminal history, which helped).
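Instead of editing the file by hand, the same repair can be done with kubectl's own config subcommands; the names below are placeholders for whatever kubectl config get-contexts reports:

```shell
# Inspect what contexts, clusters and users the kubeconfig already knows about
kubectl config get-contexts

# Recreate the broken context entry and select it (placeholder names)
kubectl config set-context my-context --cluster=my-cluster --user=my-user
kubectl config use-context my-context
```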

mcgusty

If your microk8s setup is running on Windows 11 and you are calling Helm from a local CMD or PowerShell console, running microk8s kubectl config view --raw > %USERPROFILE%/.kube/config as per the documentation will add the following entry to your kube config:

clusters:
- cluster:
    certificate-authority-data: ...
    server: https://127.0.0.1:16443
  name: microk8s-cluster

From a Windows point of view, there is no listener for port 16443 on localhost. Instead, use the IP address returned by the following command as your server address:

microk8s kubectl describe node | FIND "InternalIP"

Once you update your kube config file like this, your Helm calls should work as well.
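For illustration, with a placeholder InternalIP of 172.20.0.2, the corrected cluster entry would look like:

```yaml
clusters:
- cluster:
    certificate-authority-data: ...
    server: https://172.20.0.2:16443   # InternalIP instead of 127.0.0.1
  name: microk8s-cluster
```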

Ahatius

I found this page when searching for the "Kubernetes cluster unreachable" problem. In my case I came across the error:

Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "https://37...

And it turned out that I had just forgotten to start my minikube cluster :)

minikube start
sprutex

Make sure you are using the latest versions. I faced the same problem and solved it by updating Docker.


Update the Helm repositories:

helm repo update

Check that the cluster is reachable:

kubectl get all

Hrishabh Gupta