
From inside a container in a pod, how can I run a command using kubectl? For example, if I need to do something like this inside a container:

kubectl get pods

I have tried this : In my dockerfile, I have these commands :

RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN sudo mv ./kubectl /usr/local/bin/kubectl

EDIT: I was downloading the OSX binary; I have corrected it to the Linux binary. (Corrected thanks to @svenwltr.)

Building the Docker image succeeds, but when I run this inside the container:

kubectl get pods

I get this error:

The connection to the server : was refused - did you specify the right host or port?

When I was deploying locally, I encountered this error whenever my docker-machine was not running; but inside a container, how could a docker-machine be running?

Locally, I get around this error by running the following commands: (dev is the name of the docker-machine)

docker-machine env dev
eval $(docker-machine env dev)

Can someone please tell me what is it that I need to do?

codeforester
Dreams
  • I am confused. Do you run that container in Kubernetes or in Docker machine? – svenwltr Mar 08 '17 at 07:40
  • @svenwltr - I am running Kubernetes locally on minikube, and it suggests using the docker daemon in the Kubernetes VM. – Dreams Mar 09 '17 at 04:53

7 Answers


I would use the Kubernetes API; you only need curl installed instead of kubectl, and the rest is RESTful.

curl http://localhost:8080/api/v1/namespaces/default/pods

I'm running the above command on one of my apiservers. Change localhost to the apiserver's IP address or DNS name.

Depending on your configuration you may need to use TLS or provide a client certificate.

In order to find api endpoints, you can use --v=8 with kubectl.

example:

kubectl get pods --v=8

Resources:

Kubernetes API documentation

Update for RBAC:

I assume you have already configured RBAC, created a service account for your pod, and are running the pod with it. This service account needs list permission on pods in the required namespace, which means you need to create a role and a role binding for it.

Every container in a cluster is populated with a token that can be used for authenticating to the API server. To verify, run this inside the container:

cat /var/run/secrets/kubernetes.io/serviceaccount/token

To make a request to the apiserver, run this inside the container:

curl -ik \
     -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
     https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods
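If you would rather not skip TLS verification with -i/-k, the pod's mounted CA certificate can verify the apiserver instead. This is a minimal sketch, assuming the standard in-cluster mount path and service environment variables; the guard makes it a no-op outside a pod:

```shell
# Standard in-cluster locations; present in every pod by default.
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
APISERVER="https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}"

if [ -s "${SA_DIR}/token" ]; then
  # Verify the server with the mounted CA instead of disabling TLS checks.
  curl --cacert "${SA_DIR}/ca.crt" \
       -H "Authorization: Bearer $(cat "${SA_DIR}/token")" \
       "${APISERVER}/api/v1/namespaces/default/pods"
fi
```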
Farhad Farahi
  • I tried your suggested answer, but it gives me another error - curl: (7) Failed to connect to 192.168.99.100 port 8080: Connection refused. Although am able to do a kubectl get pods. Also, when I run the command netstat -atn to check the open ports, there are no ports shown open on the particular ip. – Dreams Mar 08 '17 at 05:39
  • @Tarun 192.168.99.100 is the api server ip address right? btw I just tested it on my setup, its working fine. – Farhad Farahi Mar 08 '17 at 06:29
  • Yes, I am running kubernetes on minikube. Its the minikube ip. I had made a mistake, I was trying port 8080. But, then when I tried a kubectl config view, it shows port 8443. Also, when I checked the api endpoints, it shows "https" for mine. Is that the same for you? When I try with https, I get a ssl error(curl: (60) SSL certificate problem: Invalid certificate chain). I am trying to resolve it, will update as soon as i resolve the error. – Dreams Mar 08 '17 at 06:48
  • @Tarun Im on production environment, try `--insecure` with your curl, Do you have your `ca.pem` and `client.pem` and `client-key.pem`? if so you can try : `curl https://ip:8443/api/v1/namespaces/default/pods --cacert ca.pem --insecure --key client-key.pem --cert client.pem` – Farhad Farahi Mar 08 '17 at 07:30
  • I tried --insecure but it does not resolve the problem, Ya, I did not have .pem files, will try adding them and update asap. Again, thanks a lot :) – Dreams Mar 09 '17 at 04:55
  • While that answer allows you to query API it doesn't solve the problem. See https://stackoverflow.com/a/42651673/2335253 for real answer. To play around with API I'd suggest using 'kubectl proxy' which you could run on you local machine though – tamerlaha Mar 18 '21 at 19:22

Bit late to the party here, but this is my two cents:

I've found using kubectl within a container much easier than calling the cluster's API.

(Why? Auto authentication!)

Say you're deploying a Node.js project that needs kubectl usage.

  1. Download & Build kubectl inside the container
  2. Build your application, copying kubectl to your container
  3. Voila! kubectl provides a rich cli for managing your kubernetes cluster

Helpful documentation

--- EDITS ---

After working with kubectl in my cluster pods, I found a more effective way to authenticate pods to be able to make k8s API calls. This method provides stricter authentication.

  1. Create a ServiceAccount for your pod, and configure your pod to use said account. k8s Service Account docs
  2. Configure a RoleBinding or ClusterRoleBinding to allow services to have the authorization to communicate with the k8s API. k8s Role Binding docs
  3. Call the API directly, or use the k8s client to manage API calls for you. I HIGHLY recommend using the client: it configures itself automatically inside pods, which removes the authentication-token step required with plain requests.

When you're done, you will have the following: ServiceAccount, ClusterRoleBinding, Deployment (your pods)

Feel free to comment if you need some clearer direction, I'll try to help out as much as I can :)

All-in-one example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-101
spec:
  replicas: 3
  selector:
    matchLabels:
      app: k8s-101
  template:
    metadata:
      labels:
        app: k8s-101
    spec:
      serviceAccountName: k8s-101-role
      containers:
      - name: k8s-101
        imagePullPolicy: Always
        image: salathielgenese/k8s-101
        ports:
        - name: app
          containerPort: 3000
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k8s-101-role
subjects:
- kind: ServiceAccount
  name: k8s-101-role
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-101-role

The salathielgenese/k8s-101 image contains kubectl. So one can simply exec into a pod container and run kubectl as if it were running on the k8s host: kubectl exec -it <pod-name> -- kubectl get pods
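You can also check from outside, without exec'ing in, what the bound service account is allowed to do, using kubectl auth can-i with impersonation. A sketch, with the names taken from the example above (the impersonated user string has a fixed format):

```shell
# Impersonate the k8s-101-role service account and ask the apiserver
# whether it may list pods. The user string format is:
#   system:serviceaccount:<namespace>:<serviceaccount-name>
NS=default
SA=k8s-101-role
SA_USER="system:serviceaccount:${NS}:${SA}"

# Only meaningful where kubectl and a cluster are reachable.
if command -v kubectl >/dev/null 2>&1; then
  kubectl auth can-i list pods --as="${SA_USER}"
fi
```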

Salathiel Genese
mster
  • Can you describe the part on how to connect to the API in a bit more detail? – Berndinox Sep 04 '18 at 19:47
  • 1
    @Berndinox sure! All of the `kubectl` commands can be input over command line. Using Node's fork child process, you can execute these commands. Make sure to initialize the `kubectl proxy` before executing other commands. To build the docker image with `kubectl`: https://pastebin.com/6a8kp6aR – mster Sep 07 '18 at 18:54
  • 1
    As a k8s beginner, those directives above sounds magic but I found some help to translate it into some configuration - https://kubernetes.slack.com/archives/C09NXKJKA/p1555058986037900?thread_ts=1555049081.017200&cid=C09NXKJKA. So I'll edit to provide an example. – Salathiel Genese Apr 12 '19 at 09:25
  • @mster Could you help me with the Dockerfile? I want to install kubectl in the container I'm getting the error: `failed to solve with frontend dockerfile.v0: failed to create LLB definition: circular dependency detected on stage: kubectl`. This is my Dockerfile: https://pastebin.com/Lqb1py64 – Lucas Scheepers Nov 15 '21 at 18:22
  • 1
    Never ever give Cluster-Admin if it's not needed. Apart from that, the solution seems viable. – Dorian Gaensslen Nov 23 '21 at 14:54

First Question

/usr/local/bin/kubectl: cannot execute binary file

It looks like you downloaded the OSX binary for kubectl. When running in Docker you probably need the Linux one:

https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

Second Question

If you run kubectl in a properly configured Kubernetes cluster, it should be able to connect to the apiserver.

kubectl basically uses this code to find the apiserver and authenticate: github.com/kubernetes/client-go/rest.InClusterConfig

This means:

  • The host and port of the apiserver are stored in the environment variables KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT.
  • The access token is mounted to /var/run/secrets/kubernetes.io/serviceaccount/token.
  • The server certificate is mounted to /var/run/secrets/kubernetes.io/serviceaccount/ca.crt.

This is all data kubectl needs to know to connect to the apiserver.
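A quick way to confirm all three pieces are in place from inside the container; this is a sketch that assumes the standard service-account mount path:

```shell
# Check the in-cluster configuration kubectl relies on.
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount

echo "apiserver: ${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}"
[ -s "${SA_DIR}/token" ]  && echo "token: present"  || echo "token: missing"
[ -s "${SA_DIR}/ca.crt" ] && echo "ca.crt: present" || echo "ca.crt: missing"
```

If any of these come back empty or missing, the container is most likely not running as part of a pod.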

Some thoughts on why this might not work:

  • The container doesn't run in Kubernetes.
    • It's not enough to use the same Docker host; the container needs to run as part of a pod definition.
  • The access is restricted by using an authorization plugin (which is not the default).
  • The service account credentials are overwritten by the pod definition (spec.serviceAccountName).
svenwltr
  • Yes, thank you for pointing it out. I made the necessary changes for linux. But, I still get another error, will update the question. :) – Dreams Mar 08 '17 at 06:50
  • I updated the answer. Not sure if I understood the question correctly. – svenwltr Mar 08 '17 at 08:04
  • 1
    Haven't been able to try this, will update asap. But, thanks for your response, it definitely gave me a much better understanding of the issue :) – Dreams Mar 09 '17 at 04:54
  • interestingly I have found the linux kubectl (as per the link you mentioned) to run on big sur... – Remigius Stalder Mar 28 '21 at 18:46

I just faced this concept again. It is absolutely possible, but for security reasons let's not give that container cluster-admin privileges via a ClusterRole.

Let's say we want to deploy a pod with access to view and create pods only in a specific namespace of the cluster. In that case, the ServiceAccount and its RBAC could look like:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spinupcontainers
subjects:
- kind: ServiceAccount
  name: spinupcontainers
  namespace: <YOUR_NAMESPACE>
roleRef:
  kind: Role
  name: spinupcontainers
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: spinupcontainers
  # "namespace" would be omitted for a ClusterRole, since ClusterRoles are not namespaced
  namespace: <YOUR_NAMESPACE>
  labels:
    k8s-app: <YOUR_APP_LABEL>
rules:
#
# Give here only the privileges you need
#
- apiGroups: [""]
  resources:
  - pods
  verbs:
  - create
  - update
  - patch
  - delete
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spinupcontainers
  namespace: <YOUR_NAMESPACE>
  labels:
    k8s-app: <YOUR_APP_LABEL>
---

If you reference the service account in your deployment with serviceAccountName: spinupcontainers in the pod spec, you don't need to mount any additional secret volumes or manually attach certificates: the kubectl client will pick up the required tokens from /var/run/secrets/kubernetes.io/serviceaccount. Then you can test whether it is working with something like:

$ kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl get pods -n <YOUR_NAMESPACE>
NAME         READY   STATUS    RESTARTS   AGE
pod1-0       1/1     Running   0          6d17h
pod2-0       1/1     Running   0          6d16h
pod3-0       1/1     Running   0          6d17h
pod3-2       1/1     Running   0          67s

or permission denied:

$ kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl get pods -n kube-system
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:spinupcontainers" cannot list resource "pods" in API group "" in the namespace "kube-system"
command terminated with exit code 1

Tested on:

$ kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Nick G

Combining the answers above, this did the trick for me for retrieving all pods from within a container:

curl --insecure -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"  https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/namespaces/default/pods

See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#-strong-read-operations-pod-v1-core-strong- for the REST API.
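The response is raw JSON. If jq is not available in the container, a rough shell filter can pull out just the pod names; this is only a sketch (it grabs every "name" field in the response, which is good enough for a quick look):

```shell
# Extract "name": "..." values from an API response on stdin.
extract_pod_names() {
  grep -o '"name": *"[^"]*"' | sed 's/.*"name": *"\([^"]*\)".*/\1/'
}

# Usage (piped from the curl call above):
#   curl ... | extract_pod_names
```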

userM1433372

To run kubectl commands inside a container, it takes three steps:

  1. Install kubectl:
RUN printf '[kubernetes] \nname = Kubernetes\nbaseurl = https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64\nenabled = 1\ngpgcheck = 1\nrepo_gpgcheck=1\ngpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg' \
  | tee /etc/yum.repos.d/kubernetes.repo \
  && cat  /etc/yum.repos.d/kubernetes.repo \
  && yum install -y kubectl

  2. Create a ClusterRoleBinding with the cluster-admin role for the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mysa-admin-sa
  namespace: mysa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: mysa-admin-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: mysa-admin-sa
    namespace: mysa

  3. Example of a CronJob configuration:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: scaleup
  namespace: myapp
spec:
  schedule: "00 5 * * 1-5"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: mysa-admin-sa
          restartPolicy: OnFailure
          containers:
          - name: scale-up
            image: myimage:test
            imagePullPolicy: Always
            command: ["/bin/sh"]
            args: ["-c", "mykubcmd_script >>/mylog.log"]
Amit Singh
  • How you configured the dockerfile for docker image to kubectl etc... – Dani May 18 '22 at 09:43
  • you would need to have kubectl client installed on the docker image. For e.g. for centos based OS you need to add this line in Dockerfile. `RUN printf '[kubernetes] \nname = Kubernetes\nbaseurl = https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64\nenabled = 1\ngpgcheck = 1\nrepo_gpgcheck=1\ngpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg' \ | tee /etc/yum.repos.d/kubernetes.repo \ && cat /etc/yum.repos.d/kubernetes.repo \ && yum install -y kubectl` – Amit Singh May 19 '22 at 10:49
  1. To run a command inside a pod with a single container, use:

kubectl exec -it <pod-name> -- <command-name>

  2. To run a command inside a pod with multiple containers, use:

kubectl exec -it <pod-name> -c <container-name> -- <command-name>

Aditya Bhuyan