
Here is the scenario:

I have deployed a Kubernetes cluster, and inside it I have created an application (say, test) that uses the "bitnami/kubectl:1.20" image. I have created the required ClusterRole, RoleBinding, and ServiceAccount (roughly as sketched below) so that this test application is authorized to manage deployments. My requirement is to be able to fetch and patch the deployments running in the cluster from inside the test pod, as part of a CronJob.
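
For reference, the RBAC objects are along these lines (an illustrative sketch, not the exact manifests; the names and rules in my cluster differ slightly):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-kubectl-scale-deploy
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deploy-manager              # illustrative name
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deploy-manager-binding      # illustrative name
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: deploy-manager
subjects:
- kind: ServiceAccount
  name: local-kubectl-scale-deploy
  namespace: default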

When the CronJob runs, the test pod fails with the following error: "The connection to the server x.x.x.x:6443 was refused - did you specify the right host or port?"

I have passed the kubeconfig file of the cluster to the test pod as a ConfigMap.

Below is the CronJob YAML:

kind: CronJob
apiVersion: batch/v1beta1
metadata:
  name: test
spec:
  schedule: "*/2 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: local-kubectl-scale-deploy
          containers:
          - name: mycron-container
            image: bitnami/kubectl:1.20
            imagePullPolicy: IfNotPresent
            env:
              - name: KUBECONFIG
                value: "/tmp/kubeconfig"
            command: [ "/bin/sh" ]
            args: [ "/var/httpd-init/script", "namespace" ]
            tty: true
            volumeMounts:
            - name: script
              mountPath: "/var/httpd-init/"
            - name: kubeconfig
              mountPath: "/tmp/"
          volumes:
          - name: script
            configMap:
              name: cronscript
              defaultMode: 0777
          - name: kubeconfig
            configMap:
              name: kubeconfig
          restartPolicy: OnFailure
          terminationGracePeriodSeconds: 0

  concurrencyPolicy: Replace

Below is the kubeconfig file that I am passing into the pod as a ConfigMap:

kind: ConfigMap
apiVersion: v1
metadata:
  name: kubeconfig
  labels:
    app: kubeconfig
data:
  kubeconfig: |
    apiVersion: v1
    kind: Config
    clusters:
    - name: default-cluster
      cluster:
        certificate-authority-data: xxxxxxxxxxx
        server: https://x.x.x.x:6443
    contexts:
    - name: default-context
      context:
        cluster: default-cluster
        namespace: default
        user: default-user
    current-context: default-context
    users:
    - name: default-user
      user:
        token: xxxxxx        

Am I missing some configuration in the pod that is needed to communicate with x.x.x.x:6443?

asked by bunny
  • Which version of Kubernetes did you use, and how did you set up the cluster? Did you use a bare-metal installation or a cloud provider? This is important for reproducing your problem. – Mikołaj Głodziak Jan 31 '22 at 13:13

1 Answer

To diagnose why the connection to that address is refused, we would need a lot more detail about your network configuration, but we can fix the CronJob instead.

The proper way to achieve this is to use a service account with proper RBAC, similar to https://stackoverflow.com/a/58378834/3930971
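
In practice that means dropping the kubeconfig and letting kubectl use the in-cluster configuration that every pod gets through its mounted service account token. A minimal sketch of the pod template, assuming the local-kubectl-scale-deploy service account already has get/list/patch rights on deployments:

      template:
        spec:
          serviceAccountName: local-kubectl-scale-deploy
          containers:
          - name: mycron-container
            image: bitnami/kubectl:1.20
            imagePullPolicy: IfNotPresent
            # No KUBECONFIG env var and no kubeconfig volume: kubectl falls
            # back to the service account token mounted in the pod and the
            # KUBERNETES_SERVICE_HOST/PORT environment variables.
            command: [ "/bin/sh" ]
            args: [ "/var/httpd-init/script", "namespace" ]
            volumeMounts:
            - name: script
              mountPath: "/var/httpd-init/"
          volumes:
          - name: script
            configMap:
              name: cronscript
              defaultMode: 0777
          restartPolicy: OnFailure

With this, the kubeconfig ConfigMap and its volume mount are no longer needed, and the hard-coded https://x.x.x.x:6443 address disappears from the picture entirely.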

answered by chicocvenancio