
I have a working installation of Kubernetes 1.1.1 running on Debian.

I also have a private registry running nicely on v2.

I am facing a weird problem.

I define a pod on the master:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: docker-registry.hiberus.com:5000/debian:ssh
  imagePullSecrets:
  - name: myregistrykey

I also have the secret on my master:

myregistrykey   kubernetes.io/dockercfg   1   44m

and my config.json looks like this:

{
  "auths": {
    "https://docker-registry.hiberus.com:5000": {
      "auth": "anNhdXJhOmpzYXVyYQ==",
      "email": "jsaura@heraldo.es"
    }
  }
}

and so I base64-encoded it and created my secret.

simple as hell
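
For reference, a minimal sketch of what such a secret can look like; the data value below is only a placeholder, and the kubernetes.io/dockercfg type expects a single .dockercfg key holding the base64-encoded registry auth entries:

apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
type: kubernetes.io/dockercfg
data:
  # placeholder, not the real encoded credentials
  .dockercfg: <base64-encoded registry auth entries>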

on my node the image gets pulled without any problem

docker images
REPOSITORY                                TAG      IMAGE ID       CREATED          VIRTUAL SIZE
docker-registry.hiberus.com:5000/debian   ssh      3b332951c107   29 minutes ago   183.3 MB
golang                                    1.4      2819d1d84442   7 days ago       562.7 MB
debian                                    latest   91bac885982d   8 days ago       125.1 MB
gcr.io/google_containers/pause            0.8.0    2c40b0526b63   7 months ago     241.7 kB

but my container does not start

./kubectl describe pod nginx
Name: nginx
Namespace: default
Image(s): docker-registry.hiberus.com:5000/debian:ssh
Node: 192.168.29.122/192.168.29.122
Start Time: Wed, 18 Nov 2015 17:08:53 +0100
Labels: app=nginx
Status: Running
Reason:
Message:
IP: 172.17.0.2
Replication Controllers: 
Containers:
nginx:
Container ID: docker://3e55ab118a3e5d01d3c58361abb1b23483d41be06741ce747d4c20f5abfeb15f
Image: docker-registry.hiberus.com:5000/debian:ssh
Image ID: docker://3b332951c1070ba2d7a3bb439787a8169fe503ed8984bcefd0d6c273d22d4370
State: Waiting
Reason: CrashLoopBackOff
Last Termination State: Terminated
Reason: Error
Exit Code: 0
Started: Wed, 18 Nov 2015 17:08:59 +0100
Finished: Wed, 18 Nov 2015 17:08:59 +0100
Ready: False
Restart Count: 2
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
default-token-ha0i4:
Type: Secret (a secret that should populate this volume)
SecretName: default-token-ha0i4
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
16s 16s 1 {kubelet 192.168.29.122} implicitly required container POD Created Created with docker id 4a063be27162
16s 16s 1 {kubelet 192.168.29.122} implicitly required container POD Pulled Container image "gcr.io/google_containers/pause:0.8.0" already present on machine
16s 16s 1 {kubelet 192.168.29.122} implicitly required container POD Started Started with docker id 4a063be27162
16s 16s 1 {kubelet 192.168.29.122} spec.containers{nginx} Pulling Pulling image "docker-registry.hiberus.com:5000/debian:ssh"
15s 15s 1 {scheduler } Scheduled Successfully assigned nginx to 192.168.29.122
11s 11s 1 {kubelet 192.168.29.122} spec.containers{nginx} Created Created with docker id 36df2dc8b999
11s 11s 1 {kubelet 192.168.29.122} spec.containers{nginx} Pulled Successfully pulled image "docker-registry.hiberus.com:5000/debian:ssh"
11s 11s 1 {kubelet 192.168.29.122} spec.containers{nginx} Started Started with docker id 36df2dc8b999
10s 10s 1 {kubelet 192.168.29.122} spec.containers{nginx} Pulled Container image "docker-registry.hiberus.com:5000/debian:ssh" already present on machine
10s 10s 1 {kubelet 192.168.29.122} spec.containers{nginx} Created Created with docker id 3e55ab118a3e
10s 10s 1 {kubelet 192.168.29.122} spec.containers{nginx} Started Started with docker id 3e55ab118a3e
5s 5s 1 {kubelet 192.168.29.122} spec.containers{nginx} Backoff Back-off restarting failed docker container

It loops internally, trying to start, but it never does.

The weird thing is that if I do a docker run on my node manually, the container starts without any problem, but through the pod the image gets pulled and the container never stays up.

am I doing something wrong?

If I use a public image for my pod it starts without any problem; this only happens when I use images from my private registry.

I have also moved from Debian to Ubuntu; no luck, same problem.

I have also linked the secret to the default service account, still no luck.
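
For reference, a rough sketch of what that linking can look like (whether the secret was attached exactly this way is an assumption):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
imagePullSecrets:
# the pull secret defined earlier; the exact attachment shown here is assumed
- name: myregistrykey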

I have also cloned the latest git version and compiled it; no luck.

It seems clear to me that the problem is related to using the private registry, but I have followed all the info I could find and still have no luck.

  • It's not clear to me that the private registry is the problem, as the events show that the images were pulled successfully. You may want to check if your pod YAML is correct. Also try `kubectl logs -c -p` to see the logs of the last terminated container. – Yu-Ju Hong Nov 24 '15 at 23:45
  • Thanks for the answer. I thought it was the private registry's fault because the same YAML works when I change the image to a public one :/. There are no logs for the container, I think, but I will check tomorrow. I do eventually see it running on my slave, but it dies fast. – Julio Saura Nov 29 '15 at 07:12
  • this is my pod definition:

    apiVersion: v1
    kind: Pod
    metadata:
      name: prueba
      labels:
        app: prueba
    spec:
      containers:
      - name: prueba
        image: docker-registry.hiberus.com:5000/debian:ssh
      imagePullSecrets:
      - name: clave

    – Julio Saura Nov 29 '15 at 07:15
  • Weird, I can describe the pod with kubectl describe pod but I can't access the logs:

    root@kubernetes-master:/opt/kubernetes# ./kubectl logs prueba -c 26fdaa2e1b60
    Error from server: the server could not find the requested resource ( Pod prueba)
    root@kubernetes-master:/opt/kubernetes# ./kubectl logs prueba
    Error from server: Internal error occurred: Pod "prueba" in namespace "default": container "prueba" is in waiting state.

    – Julio Saura Nov 29 '15 at 07:28
  • Check the image you are pulling by running it on Docker directly to see if it fails fast. If it does, you first need to figure out why. As noted, the image is pulled correctly, so pulling is not the issue. – MrE Dec 16 '15 at 19:57
  • Hello, the image works great when pulled and run directly from the registry. When using Kubernetes it gets pulled and starts, but it instantly dies and I don't see any reason for it :( Thanks! – Julio Saura Jan 07 '16 at 08:30

1 Answer


A Docker container exits when its main process exits. Could you share the container logs?

  1. If you run docker ps -a you should see all running and exited containers.
  2. Run docker container logs container_id to see why the container exited.

Also try running your container in both interactive and daemon mode to see whether it fails only in daemon mode.

Running in daemon mode -

docker run -d -t Image_name

Running in interactive mode -

docker run -it Image_name

Running in interactive daemon mode -

docker run -idt Image_name
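
If the image's main process returns immediately (the describe output above shows an exit code of 0 within a second of starting), the pod will keep crash-looping even though the pull succeeded. Below is a minimal sketch of overriding the command so the main process stays in the foreground, under the assumption that the debian:ssh image is meant to run sshd (the -D flag keeps it from daemonizing):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: docker-registry.hiberus.com:5000/debian:ssh
    # assumption: the image ships sshd; running it with -D keeps the
    # container's main process in the foreground so it does not exit
    command: ["/usr/sbin/sshd", "-D"]
  imagePullSecrets:
  - name: myregistrykey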

Refer to: Why docker container exits immediately