I'm flabbergasted.
I have a staging and a production environment. Both environments have the same deployments, services, ingress, and firewall rules, and both serve a 200 on `/`.

However, after bringing up the staging environment and provisioning the same ingress, the staging service fails with `Some backend services are in UNKNOWN state`. Production is still live.
Both the frontend and backend pods are ready on GKE. I've manually tested the health checks, and they pass when I visit `/`.

I see nothing in the logs or the GCP docs pointing in the right direction. What could I have possibly broken?
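For reference, this is roughly how I've been checking the backend state (the backend-service name below is a placeholder; GKE derives it from the node port):

```sh
# Pods report Ready on the Kubernetes side
kubectl get pods -l app=frontend

# The ingress annotations show GCP's view of each backend's health
kubectl describe ingress fanout-ingress

# Inspect the auto-created backend service directly
gcloud compute backend-services list
gcloud compute backend-services get-health k8s-be-30664--<hash> --global
```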
`ingress.yaml`:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "STATIC-IP"
spec:
  backend:
    serviceName: frontend
    servicePort: 8080
  tls:
  - hosts:
    - <DOMAIN>
    secretName: staging-tls
  rules:
  - host: <DOMAIN>
    http:
      paths:
      - path: /*
        backend:
          serviceName: frontend
          servicePort: 8080
      - path: /backend/*
        backend:
          serviceName: backend
          servicePort: 8080
```
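The firewall side looks identical in both environments as well; this is the kind of check I mean (the filter is just illustrative):

```sh
# GKE-created firewall rules; the health checkers at 130.211.0.0/22
# and 35.191.0.0/16 must be allowed to reach the node port
gcloud compute firewall-rules list --filter="name~gke"

# The health check GCP generated for the backend; its request path
# has to return a 200 from the pod
gcloud compute health-checks list
```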
`frontend.yaml`:
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: frontend
  name: frontend
  namespace: default
spec:
  ports:
  - nodePort: 30664
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: frontend
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  generation: 15
  labels:
    app: frontend
  name: frontend
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - image: <our-image>
        name: frontend
        ports:
        - containerPort: 8080
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 30
          timeoutSeconds: 3
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 30
          timeoutSeconds: 3
```
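By "manually tested the health checks" I mean hitting the node port directly, the way the GCP health checker would (the node IP is a placeholder):

```sh
# From a VM in the same network: request / on the service's NodePort,
# which is what the generated health check probes
curl -s -o /dev/null -w "%{http_code}\n" http://<NODE_IP>:30664/
# prints 200
```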