As the kubernetes.io docs state about a Service of type LoadBalancer:
On cloud providers which support external load balancers, setting the type field to LoadBalancer provisions a load balancer for your Service. The actual creation of the load balancer happens asynchronously, and information about the provisioned balancer is published in the Service's .status.loadBalancer field.
On AWS Elastic Kubernetes Service (EKS), an AWS Load Balancer is provisioned that load balances network traffic (see the AWS docs & the example project on GitHub provisioning an EKS cluster with Pulumi). Assuming we have a Deployment ready with the selector app=tekton-dashboard (it's the default Tekton dashboard you can deploy as stated in the docs), a Service of type LoadBalancer defined in tekton-dashboard-service.yml could look like this:
apiVersion: v1
kind: Service
metadata:
  name: tekton-dashboard-external-svc-manual
spec:
  selector:
    app: tekton-dashboard
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9097
  type: LoadBalancer
If we create the Service in our cluster with kubectl apply -f tekton-dashboard-service.yml -n tekton-pipelines, the AWS ELB gets created automatically:
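For illustration, listing the Service should eventually show the load balancer's DNS name in the EXTERNAL-IP column (the cluster IP and hostname below are made-up placeholders; AWS generates the real hostname):

kubectl get service tekton-dashboard-external-svc-manual -n tekton-pipelines

NAME                                    TYPE           CLUSTER-IP      EXTERNAL-IP                                                   PORT(S)        AGE
tekton-dashboard-external-svc-manual    LoadBalancer   10.100.42.167   a1b2c3d4e5f6g7h8-1234567890.eu-central-1.elb.amazonaws.com   80:31234/TCP   2m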
There's only one problem: the .status.loadBalancer field is populated with the ingress[0].hostname field asynchronously and is therefore not available immediately. We can check this if we run the following commands together:
kubectl apply -f tekton-dashboard-service.yml -n tekton-pipelines && \
kubectl get service/tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer}'
The output will be an empty field:
{}%
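Only after the load balancer has been provisioned (typically a minute or two on AWS) does the same command return the populated field; the hostname below is again just a placeholder:

kubectl get service/tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer}'

{"ingress":[{"hostname":"a1b2c3d4e5f6g7h8-1234567890.eu-central-1.elb.amazonaws.com"}]}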
So if we want to run this setup in a CI pipeline (e.g. GitHub Actions, see the example project's workflow provision.yml), we need to somehow wait until the .status.loadBalancer field has been populated with the AWS ELB's hostname. How can we achieve this using kubectl wait?