For at least some of the ingress controllers out there, two variables must be supplied: POD_NAME and POD_NAMESPACE. The nginx ingress controller makes sure to inject these two variables into the container(s) as seen here (link for Azure deployment templates), HAProxy uses them as well (as shown here), and probably others do the same.
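The injection is typically done with the Kubernetes Downward API, i.e. `valueFrom`/`fieldRef` entries in the container spec. A minimal sketch of what such a manifest fragment looks like (the container name and image are illustrative, not taken from any particular chart):

```yaml
# Fragment of a pod/deployment spec injecting pod identity via the Downward API.
# The controller binary then simply reads POD_NAME / POD_NAMESPACE from its environment.
containers:
  - name: ingress-controller      # illustrative name
    image: example/ingress:latest # illustrative image
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: POD_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
```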

I get why these two values are needed. For the nginx ingress controller, the value of the POD_NAMESPACE variable is used to potentially restrict the ingress resource objects the controller watches to just the namespace it's deployed in, through the --watch-namespace parameter (the Helm chart showing this in action is here). As for POD_NAME, not having it set will cause errors in the ingress controller's internal code (the function here), which in turn will probably prevent it from running at all.

Couldn't the ingress controller obtain this information automatically, based on the permissions it has to run (after all it can watch for changes at the Kubernetes level, so one would assume it's "powerful" enough to see its own pod name and the namespace where it was deployed)? In other words, can't the ingress controller do a sort of "whoami" and get its own data? Or is this perhaps a common pattern used across Kubernetes?
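For what it's worth, a process can partially answer this "whoami" on its own, without injected variables: the pod name defaults to the container's hostname, and the namespace is mounted into every pod next to the service account token. A hedged sketch of that idea (the file path is the standard service-account mount; the environment-variable fallbacks are illustrative):

```python
import os
import socket

# Standard path where Kubernetes mounts the pod's namespace
# alongside the service account token.
NS_FILE = "/var/run/secrets/kubernetes.io/serviceaccount/namespace"


def whoami(ns_file=NS_FILE):
    """Best-effort self-identification inside a pod.

    The pod name defaults to the container hostname unless overridden;
    the namespace can be read from the service-account mount. Falls
    back to the conventional env vars when those sources are missing.
    """
    pod_name = os.environ.get("POD_NAME") or socket.gethostname()
    try:
        with open(ns_file) as f:
            namespace = f.read().strip()
    except FileNotFoundError:
        namespace = os.environ.get("POD_NAMESPACE", "default")
    return pod_name, namespace
```

Note this breaks down if the pod spec overrides `hostname`, which is presumably part of why the explicit Downward API injection is preferred.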

Wytrzymały Wiktor
Mihai Albert
  • What do you need this for? Are you trying to build something and running into a problem? Have you set up an environment? If so, how? – Mikołaj Głodziak Aug 12 '21 at 11:51
  • @MikołajGłodziak I'm not running into any problem; actually, all the nginx ingress controllers I have are running as expected. I've been digging deeper into how they work, however, and I couldn't answer the simple question: "If ingress controllers have so many roles/rights delegated to them, how come they can't just pick up their own pod name/namespace and have to rely on injected variables?". I think this highlights something I don't understand, and I'd like to address it – Mihai Albert Aug 12 '21 at 13:39
  • Is the confusion about what each layer is? Pods are collections of containers. Containers are like virtual environments for applications. The application runs the software (ingress controller + load balancer). In my view, a pod does not have intelligence. It's a wrapper around one or many container environments, which have applications running in them. Ultimately, the application needs to learn the pod name / namespace, and environment variables are the traditional way to pass that info down through the layers. Maybe it could learn this on its own, but self-reflection has its own complexity. – Nick Ramirez Aug 20 '21 at 17:36
  • The ingress controller running inside the container talks to the Kubernetes API as described here https://docs.nginx.com/nginx-ingress-controller/intro/how-nginx-ingress-controller-works/#the-ingress-controller-pod. It has sufficient access rights to the cluster, given the RoleBindings and ClusterRoleBindings granting access to the service account powering it. Now think of 'curl ifconfig.me' to obtain the outbound public IP a client is perceived as using, and apply this analogy to our use case: the nginx binary would ask the K8s API "hey, which pod is this connection coming from?" – Mihai Albert Aug 20 '21 at 18:01

1 Answer

This is by design; it is the approach the community developing this functionality has taken.

When the pod is started, the variables are already known: Kubernetes provides them (via the Downward API) and the pod can use them as soon as it runs.

Of course, if you have a better idea to solve this, you can suggest it in the official thread on github.

However, bear in mind that this potential solution:

Couldn't the ingress controller obtain this information automatically, based on the permissions it has to run (after all it can watch for changes at the Kubernetes level, so one would assume it's "powerful" enough to see its own pod name and the namespace where it was deployed)? In other words, can't the ingress controller do a sort of "whoami" and get its own data? Or is this perhaps a common pattern used across Kubernetes?

would require extra steps. First, the pod would need additional privileges; second, when it starts, it would not have these values yet and would have to query the API server for them.
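Those "additional privileges" would amount to something like the following RBAC rule (a sketch; the Role name and namespace are illustrative), granting the controller's service account read access to pod objects so it could look up its own:

```yaml
# Hypothetical Role that would let the controller discover its own pod
# via the API instead of relying on injected environment variables.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-self-lookup   # illustrative
  namespace: ingress-nginx    # illustrative
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
```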

Mikołaj Głodziak