
I have a local Kubernetes cluster up and running using k3s. It works like a charm so far.

On it I'm running a custom Docker registry from which I want to pull images for other deployments.

The registry is exposed to the host by means of a NodePort service. Internally it listens on port 5000; externally it's on port 31320.

I can push Docker images to the registry from the host by tagging them as myhostname:31320/myimage:latest. This works great, too.
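For reference, the push from the host can be sketched like this (the names are the ones from above):

```shell
# Compose the full tag from the NodePort-exposed registry address.
REGISTRY=myhostname:31320
IMAGE=myimage:latest
TAG="$REGISTRY/$IMAGE"
echo "$TAG"   # myhostname:31320/myimage:latest

# On the host, with Docker installed:
#   docker tag "$IMAGE" "$TAG"
#   docker push "$TAG"
```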

Now I want to use this image in a basic Job. I'm using the full tag myhostname:31320/myimage:latest as the container image entry, like this:

apiVersion: batch/v1
kind: Job
metadata:
  name: hello-world
spec:
  template:
    metadata:
      name: hello-world-pod
    spec:
      containers:
        - name: hello-world
          image: myhostname:31320/myimage:latest
      restartPolicy: Never

Unfortunately, I keep getting a 400 Bad Request error stating that the image can't be pulled. If I try using the internal service name of the registry and the internal port instead, as in private-registry:5000/myimage:latest, I get the same error.

I suppose I cannot use private-registry:5000/myimage:latest because that's simply not the tag of the image. And I cannot push the image as private-registry:5000/myimage:latest because the hostname private-registry is only resolvable inside the cluster and port 5000 is not exposed to the host.

So... I'm stuck. What can I do about this? How do I push images from the host to the registry and have them pulled from inside the cluster?

Hendrik Wiese

1 Answer


Kubernetes has rich documentation on how to configure multiple registries so that deployments/pods can pull from public or private registries. To do so, you can create an image pull secret resource (docs). You can either create it by running this command:

kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-name> \
  --docker-password=<your-pword>

or by deploying this resource in your cluster:

apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
  namespace: awesomeapps
data:
  # Make sure you base64-encode the whole file!
  # base64 -w 0 registry.json
  .dockerconfigjson: <registry.json>
type: kubernetes.io/dockerconfigjson

Example registry.json:

{
    "auths": {
        "your.private.registry.example.com": {
            "username": "janedoe",
            "password": "xxxxxxxxxxx",
            "email": "jdoe@example.com",
            "auth": "c3R...zE2"
        }
    }
}
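To produce the value for .dockerconfigjson, encode the whole file — note that base64 -d decodes, while here you need to encode. A sketch using the example file above:

```shell
# Write the example config to a file (substitute your real values),
# then base64-encode it for the Secret's .dockerconfigjson field.
cat > registry.json <<'EOF'
{
    "auths": {
        "your.private.registry.example.com": {
            "username": "janedoe",
            "password": "xxxxxxxxxxx",
            "email": "jdoe@example.com",
            "auth": "c3R...zE2"
        }
    }
}
EOF

# -w 0 disables line wrapping (GNU coreutils; on macOS use plain `base64 < registry.json`).
base64 -w 0 registry.json
```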

Now you can attach this image pull secret to your Job by adding imagePullSecrets to the pod spec (the name must match the secret you created; regcred below matches the kubectl command above):

apiVersion: batch/v1
kind: Job
metadata:
  name: hello-world
spec:
  template:
    metadata:
      name: hello-world-pod
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: hello-world
          image: myhostname:31320/myimage:latest
      restartPolicy: Never

PS

You might also consider adding your registry to the Docker daemon as an insecure registry if you run into TLS issues.
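Note that k3s pulls images with its embedded containerd rather than the Docker daemon, so the daemon's insecure-registries setting only affects pushes from the host. For pulls inside the cluster, k3s reads /etc/rancher/k3s/registries.yaml. A minimal sketch, assuming the NodePort address from the question:

```yaml
# /etc/rancher/k3s/registries.yaml -- restart the k3s service after editing.
mirrors:
  "myhostname:31320":
    endpoint:
      - "http://myhostname:31320"    # plain-HTTP registry
configs:
  "myhostname:31320":
    tls:
      insecure_skip_verify: true     # skip verification of a self-signed cert
```

If the registry serves plain HTTP, the mirrors entry alone is enough; insecure_skip_verify is for an HTTPS registry with a self-signed certificate.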

You can also check this SO question.

Affes Salem
  • Thank you for your response. Since my local registry is first and foremost for testing and learning purposes, it does not require a login, has no credentials set. I'm not even sure if whoever (I'm still a Kubernetes noob) pulls the image can reach the registry on the network. – Hendrik Wiese Oct 27 '22 at 04:27
  • Depends on where you deployed your image registry and your cluster. If they can connect to each other, whether on the same network or because your registry is accessible online with a public IP, you can access it; otherwise you will need to set some inbound and outbound rules to provide a connection between the k8s cluster and the image registry. I hope that helped. – Affes Salem Oct 27 '22 at 14:22
  • I've double-checked by using a shell inside the cluster. I can ping the registry, so the network is OK. It seems the reason for the error is that the registry does not present a valid certificate (self-signed). Now I can either try to configure Kubernetes to ignore the cert error, or use cert-manager to issue a valid, signed cert. For testing purposes I'd like to take the first route and turn off cert verification. But option B is also tempting because it carries some learning. – Hendrik Wiese Oct 28 '22 at 19:49