My health checks fail with the following setup.
nginx.conf
user root;
worker_processes auto;
error_log /var/log/nginx/error.log warn;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name subdomain.domain.com;
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/.htpasswd;
}
server {
listen 80;
auth_basic off;
}
server {
listen 2222;
auth_basic off;
location /healthz {
return 200;
}
}
}
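One thing I'm unsure about: with two server blocks listening on port 80, nginx sends requests whose Host header matches no server_name to the first block for that port, which is the one behind auth_basic. Health checkers usually hit the raw IP, so they may land on the protected block. The sketch below is how I understand an explicit default would look (I don't currently run this; using default_server here is my own assumption about a fix, not something I've verified):

server {
    listen 80 default_server;  # catch requests that match no server_name, e.g. IP-based health checks
    auth_basic off;

    location /healthz {
        return 200;
    }
}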
Dockerfile
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html
VOLUME /usr/share/nginx/html
COPY /server/nginx.conf /etc/nginx/
COPY /server/htpasswd /etc/nginx/.htpasswd
EXPOSE 80
EXPOSE 2222
CMD ["nginx", "-g", "daemon off;"]
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-namespace
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: gcr.io/GOOGLE_CLOUD_PROJECT/my-app
        ports:
        - containerPort: 80
        - containerPort: 2222
        livenessProbe:
          httpGet:
            path: /healthz
            port: 2222
        readinessProbe:
          httpGet:
            path: /healthz
            port: 2222
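For what it's worth, the probes above rely on the kubelet defaults (as I understand them: periodSeconds: 10, timeoutSeconds: 1, failureThreshold: 3, initialDelaySeconds: 0). If timing is a factor, this is the kind of tuning I would try — the values here are illustrative guesses, not what I deploy:

livenessProbe:
  httpGet:
    path: /healthz
    port: 2222
  initialDelaySeconds: 5   # give nginx a moment to start before the first check
  periodSeconds: 10        # probe every 10 seconds (the default)
  timeoutSeconds: 2        # allow a slightly slower response than the 1s default
  failureThreshold: 3      # restart only after 3 consecutive failures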
It definitely works when I delete the server_name line in nginx.conf and remove the second server block. Could this be an issue with the ingress/load balancer? I don't know how long it takes to update its health checks (yesterday I watched a healthy pod go unhealthy after a few minutes). This is running on Google Kubernetes Engine (GKE) with Google's own ingress controller (not the NGINX ingress controller!).
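If it is the ingress, my understanding is that the GKE ingress controller provisions its own load-balancer health check (defaulting to GET / on the serving port), separate from the kubelet probes in the Deployment. Would I need a BackendConfig along these lines to point it at /healthz? The resource name and values below are illustrative, not from my cluster:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-app-backendconfig   # illustrative name
  namespace: my-namespace
spec:
  healthCheck:
    type: HTTP
    requestPath: /healthz      # have the LB probe the open endpoint
    port: 2222                 # instead of GET / on the serving port

As far as I know this gets attached by annotating the Service with cloud.google.com/backend-config: '{"default": "my-app-backendconfig"}'.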
What am I doing wrong?