
I am fairly new to the Google Cloud Platform and Docker. I set up a cluster of nodes, and made a Dockerfile that copies a repo and runs a Clojure REPL on a public port. I can connect to it from my IDE and play around with my code, awesome!

That REPL should, however, probably be tunneled through SSH, but here is where my problem starts. I can't find a suitable place to SSH into for making changes to the repo that Docker runs the REPL on:

  • The exposed IP just exposes the REPL service (is that the correct Kubernetes term?) and does not allow me to SSH in.
  • Neither does the cluster master endpoint; it gives me a public key error even though I've followed the "Adding or removing SSH keys for all of the instances in your project" part of the documentation.

I would like to edit the source files via SSH, but for that I would need to access the repo inside the Docker container. I don't know how to proceed.

I understand this isn't exactly a typical way to deploy applications, so I am not even sure it's possible to have multiple nodes work with a modified Docker codebase (do the nodes share the JVM somehow?).

Concretely, my question is: how do I SSH into the Docker container to access the codebase?

bbs

7 Answers


For more recent Kubernetes versions, the shell command should be separated from `kubectl exec` by `--`:

kubectl exec -it <POD NAME> -c <CONTAINER NAME> -- bash

Please note that bash needs to be available for execution inside of the container. Depending on the OS flavour, you might need to use /bin/sh or /bin/bash (or another shell) instead.
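If you don't know in advance which shell an image ships, you can let the container decide. This is just a sketch (the pod name `my-pod` is a placeholder):

```shell
# Try bash first and fall back to sh when bash is missing:
#
#   kubectl exec -it my-pod -- sh -c 'command -v bash >/dev/null 2>&1 && exec bash || exec sh'
#
# The fallback expression on its own (no cluster needed to try it):
sh -c 'command -v bash >/dev/null 2>&1 && echo "would exec bash" || echo "would exec sh"'
```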

The command format for Kubernetes 1.5.0:

kubectl exec -it <POD NAME> -c <CONTAINER NAME> bash
Sergey Shcherbakov
  • Works great. You may need to add -n if you get 'Error from server (NotFound): pods "blabla" not found'. – rdeboo Oct 11 '17 at 07:45
  • That works, except I had to use `sh` instead of `bash`. – jacob Jul 29 '19 at 20:36
  • That depends on which shell you have available inside of your container. – Sergey Shcherbakov Jul 30 '19 at 07:32
  • It seems that this option has been deprecated. I recommend using this instead: `kubectl exec --stdin --tty -- /bin/bash` – spotHound May 08 '21 at 17:24
  • @spotHound what? I've just used it yesterday, could you please share the link – Sergey Shcherbakov May 08 '21 at 17:27
  • Hi @SergeyShcherbakov, I've tested this on Coursera Qwiklabs with client version v1.21.0 and server version v1.18.16-gke.2100. – spotHound May 09 '21 at 06:58
  • Running `kubectl exec -it nginx-b498c7d77-545dv /bin/bash` there prints: "kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead." – spotHound May 09 '21 at 06:59
  • Oh, it is only about separating the shell command from kubectl exec with `--`; I'll update the answer. – Sergey Shcherbakov May 09 '21 at 08:25

List instances:

gcloud compute instances list

SSH into instance:

gcloud compute ssh <instance_name> --zone=<instance_zone>

In the instance, list the running processes and their container IDs:

sudo docker ps -a
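If several containers are running, you can pick out the ID of the one you care about by its image name. A small awk sketch over sample `docker ps` output (the image name `my-clojure-app` and the IDs are made up):

```shell
# Sample `docker ps` output, hardcoded for illustration.
docker_ps_output='CONTAINER ID   IMAGE            COMMAND       STATUS
ab12cd34ef56   my-clojure-app   "lein repl"   Up 2 hours
ff00aa11bb22   nginx:latest     "nginx"       Up 3 hours'

# Print the container ID whose IMAGE column matches the name we want.
printf '%s\n' "$docker_ps_output" | awk '$2 == "my-clojure-app" {print $1}'
# → ab12cd34ef56
```

In practice, `sudo docker ps --filter "ancestor=my-clojure-app" --format '{{.ID}}'` gets you the same thing without parsing text.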

Attach to a container:

sudo docker exec -it <container_id> bash
Yoshua Wuyts
  • Thanks! I've got a `gcr.io/cloudsql-docker/gce-proxy` that keeps failing to start, and I'm trying to diagnose it. I used `docker run -it sh`, but I need to run it with my mounts. Any idea what the volume args for `docker run` look like for secrets when on the node? – kross Oct 06 '17 at 20:07
  • SSH does not work for me: $ gcloud compute ssh --zone us-west1-b gke-xxx-default-pool-yyy-3q77 -- ps ssh: connect to host 35.100.100.10 port 22: Connection timed out – Ark-kun Jun 20 '20 at 05:50

The best way to attach to a running container is through the exec command.

Attach to a running Docker container:

docker exec -it YOUR_CONTAINER_ID bash

Attach to a running Kubernetes pod:

kubectl exec -it YOUR_POD_NAME -- bash

Attach to a running Kubernetes pod in a given namespace:

kubectl exec -it YOUR_POD_NAME -n YOUR_NAMESPACE -- bash
Kasun Siyambalapitiya
Nitin

If the pod is in your current namespace, get the list of pods:

kubectl get pods

Pick a pod. Execute a bash session on it:

kubectl exec -it [POD_NAME] -- /bin/bash

Alternatively, find the pod you want in a different namespace:

kubectl get pods --all-namespaces

Pick a pod and execute a bash session on it:

kubectl exec -it [POD_NAME] --namespace [NAMESPACE] -- /bin/bash
John McGehee

The existing answers are great, just wanted to contribute a really convenient command that lists all pods and containers, so you can choose one to plug into the kubectl exec command.

kubectl get pods -o=custom-columns=POD:.metadata.name,CONTAINERS:.spec.containers[*].name

This gives output like:

POD     CONTAINERS
pod-1   service-1,service-2
pod-2   service-1,service-2
pod-3   service-3
pod-4   service-3
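To turn that table into one pod/container pair per line (convenient for scripting), a small awk sketch over the same kind of output (pod and service names are illustrative):

```shell
# The custom-columns output from above, hardcoded for illustration.
pods='POD     CONTAINERS
pod-1   service-1,service-2
pod-3   service-3'

# Skip the header, split the comma-separated container list,
# and print one "pod container" pair per line.
printf '%s\n' "$pods" | awk 'NR > 1 {
  n = split($2, c, ",")
  for (i = 1; i <= n; i++) print $1, c[i]
}'
# → pod-1 service-1
#   pod-1 service-2
#   pod-3 service-3
```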

Then open a shell in any of those containers by just plugging in the names:

kubectl exec -it POD -c CONTAINER -- /bin/sh

e.g. service-2 in pod-2:

kubectl exec -it pod-2 -c service-2 -- /bin/sh

NOTE: add -n NAMESPACE to any of the above commands to specify a namespace if necessary.
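These variants can be wrapped in a small helper; this is only a convenience sketch (the function name `kexec` is made up), using the newer `--` separator:

```shell
# kexec POD CONTAINER [NAMESPACE] - open a shell in a container,
# falling back to the "default" namespace when none is given.
kexec() {
  kubectl exec -it "$1" -c "$2" -n "${3:-default}" -- /bin/sh
}

# e.g. kexec pod-2 service-2
# e.g. kexec pod-3 service-3 my-namespace
```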

davnicwil

"I can't find a suitable place to SSH into for making changes to the repo that Docker runs the REPL on"

When you create a cluster, you provision a number of node VMs in your Google Cloud project. If you look at https://console.cloud.google.com/compute/instances you should see them, and each one will have an external IP address that you can SSH into. Then create an SSH tunnel to a node VM that forwards a local port to the pod's IP address.
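As a concrete sketch of that tunnel (node name, zone, pod IP, and port are all placeholders for your cluster's values):

```shell
# tunnel_to_pod NODE ZONE POD_IP PORT - SSH to a node VM and forward
# a local port to the pod's IP, so the REPL is reachable on localhost.
tunnel_to_pod() {
  gcloud compute ssh "$1" --zone="$2" -- -L "$4:$3:$4"
}

# e.g. tunnel_to_pod gke-xyz-default-pool-abc us-central1-a 10.4.0.7 5555
# then point your IDE's nREPL connection at localhost:5555
```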

Note that if you are running multiple replicas of your Clojure app, you must connect to each replica separately to update the app.

Robert Bailey

Based on your description, I believe you are trying to set up a Kubernetes cloud-based development workspace, so that you can SSH into the pod containing your codebase using the public IP address of the pod, node, or cluster, and edit the code in the container/pod using the IDE on your laptop.

If your end goal is to get remote SSH access to your private Kubernetes cluster nodes or pods, then you have two options:

Option #1: Install and run an OpenSSH server inside your container pod. The SSH server listens on port 22, and you need to expose that to the outside network. Expose the pod's target port 22 through a ClusterIP or NodePort service, using a Kubernetes service configuration like the one below.

Reference: https://kubernetes.io/docs/concepts/services-networking/service/

apiVersion: v1
kind: Service
metadata:
  name: my-ssh-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    - port: 22
      targetPort: 22
      nodePort: 30022

Now you can SSH into your pod using the node's public IP address (say 34.100.0.1) and the NodePort, as shown below:

ssh user@34.100.0.1 -p 30022

The only catch here is that you need to expose your worker node to the internet using a public IP address, so that you could access your pod from outside the network. It is not a security best practice to expose your node or cluster via a public IP to the internet as it increases the attack surface of your cloud.

Option #2: An alternative and better approach (from a security standpoint) would be to use a Kubernetes cluster remote SSH access solution like SocketXP, which doesn't require any public IP to be assigned to your nodes or cluster. You can keep your cluster private and still use your IDE or similar tooling to SSH into your pod and access your codebase.

Reference: https://www.socketxp.com/docs/guide/kubernetes-pod-remote-ssh-access.html

Disclaimer: I'm the founder of SocketXP Kubernetes Remote Access solution. So I don't want to discuss my solution in detail here. You can go to the reference link above if you need the details and instructions to set it up.