
I'm creating an agent service that accepts network calls and can trigger commands in any other container in the same pod. This, of course, isn't the usual use case for pods, but I know some CI tools do something similar, such as Jenkins with its Kubernetes plugin.

Currently I have this working by installing kubectl in the agent container and having it run kubectl exec <pod> -c <container> -- <command>, and it works fine. But it seems like a big opportunity for vulnerabilities.

In order for the agent to have kubectl exec access, it needs a role granting the pods/exec resource, which gives it exec access to every pod in the same namespace:

rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec"]
  verbs: ["get", "list", "watch", "create"]

If there aren't any better ways to approach this, I'll just bake the exec commands into my agent in such a way that it'll only accept calls to the same pod.

But my big concern is around the agent executing unknown code and that code getting access to more than it should. In the Jenkins example, if someone has a pipeline that tests their code, and they were malicious and included a "test" which actually uses the kubernetes-client library to call out to the other pods in the namespace, how would you prevent that while still enabling the container-to-container communication?

I'd appreciate any suggestions!

dinkleberg
  • Any more context on what you're trying to do? A CI/CD pipeline on Kubernetes? (Tekton is good at that.) Usually two containers communicate over network ports, but here it sounds more like you want to treat your containers as old virtual machines. Or is it for some kind of debugging that can't be done with logging, tracing or something? – Jonas Aug 23 '20 at 18:03
  • Hey @Jonas, I'm building a _very lightweight_ CI component for my application. At its most basic it would just be pulling src + creating an image. I was trying to avoid Tekton or Argo pipelines, but perhaps you're right, that might be the easiest. – dinkleberg Aug 23 '20 at 22:52
  • Tekton does this in a clever way, so that all containers in the Pod are executed in sequence... so you can first pull, then build, and then push the image... or whatever you decide your Task or Pipeline does. – Jonas Aug 24 '20 at 11:49

1 Answer


Sounds like you want to execute commands in the pod but you don't want to hit the kube-apiserver. It also looks like your application is listening for a trigger (on some sort of event-based broker or application) and then executing the command.

My suggestion would be to have the application "shell out" and run the command itself, instead of having kubectl run it in the same pod with exec. You didn't specify what language your application is written in, but most common languages have a library to do an exec system call or manage processes, e.g. Golang, Python, etc.
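
For example, in Go (just an illustration — the command below is a placeholder) a minimal sketch using os/exec could look like this, with no call to the kube-apiserver at all:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Run the command directly inside this container instead of
	// going through kubectl exec and the kube-apiserver.
	out, err := exec.Command("sh", "-c", "echo hello from the agent").CombinedOutput()
	if err != nil {
		log.Fatalf("command failed: %v, output: %s", err, out)
	}
	log.Printf("output: %s", out)
}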

✌️

Rico
  • Hey @Rico, for some more context, I've got a pod with an agent container which I can make network calls to and have it "shell out". But the challenge I'm trying to overcome is that I want to be able to add containers with different dependencies (e.g. Python, Node.js), similar to how Jenkins does it with pod templates. The agent knows how to receive the commands and then runs them in the appropriate containers. Since the dependency containers could be anything, I can't bake a listener into each of them to have them shell out. – dinkleberg Aug 23 '20 at 22:57
  • Can you use a common container image with all your dependencies? What exactly do you mean about Jenkins letting you specify dependencies? Don't pod templates start from a container image too? Or do you want to start a separate pod? The regular Jenkins plugin calls the kube-apiserver too – Rico Aug 23 '20 at 23:54