33

I've found documentation on how to configure the NginX ingress controller using a ConfigMap: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/

Unfortunately I have no idea, and couldn't find anywhere, how to make my Ingress controller load that ConfigMap.

My ingress controller:

helm install --name ingress --namespace ingress-nginx --set rbac.create=true,controller.kind=DaemonSet,controller.service.type=ClusterIP,controller.hostNetwork=true stable/nginx-ingress

My config map:

kind: ConfigMap
apiVersion: v1
metadata:
  name: ingress-configmap
data:
  proxy-read-timeout: "86400s"
  client-max-body-size: "2g"
  use-http2: "false"

My ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
    - hosts:
        - my.endpoint.net
      secretName: ingress-tls
  rules:
    - host: my.endpoint.net
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 443
          - path: /api
            backend:
              serviceName: api
              servicePort: 443

How do I make my Ingress load the configuration from the ConfigMap?

Tom Raganowicz

14 Answers

27

I've managed to display what YAML gets executed by Helm by adding the --dry-run --debug options at the end of the helm install command. Then I noticed that the controller is executed with --configmap={namespace-where-the-nginx-ingress-is-deployed}/{name-of-the-helm-chart}-nginx-ingress-controller. In order to load your ConfigMap you need to override it with your own (mind the namespace).
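For reference, this is roughly how that flag can be spotted in the dry-run output (using the install command from the question; the grep is just illustrative):

helm install --name ingress --namespace ingress-nginx \
  --set rbac.create=true,controller.kind=DaemonSet,controller.service.type=ClusterIP,controller.hostNetwork=true \
  stable/nginx-ingress --dry-run --debug | grep -- "--configmap"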

kind: ConfigMap
apiVersion: v1
metadata:
  name: {name-of-the-helm-chart}-nginx-ingress-controller
  namespace: {namespace-where-the-nginx-ingress-is-deployed}
data:
  proxy-read-timeout: "86400"
  proxy-body-size: "2g"
  use-http2: "false"

The list of config properties can be found here: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/

Tom Raganowicz
  • is --configmap in a yaml somewhere? how do you see what --configmap is on a running deployment? – red888 Apr 22 '19 at 22:00
  • `--configmap` is not a recognized flag for helm. While I have no trouble creating a config map and nginx ingress, I am still clueless how to link the two together. The ingress is not picking up the properties from the config map. – ScottSummers May 13 '19 at 08:36
  • Don't use the `--configmap` option; name your configmap the same way Helm internally names it. If you read my answer again you will be able to spot it. – Tom Raganowicz May 14 '19 at 09:24
  • The name of the config map that is applied is `{name-of-the-helm-chart}-ingress-nginx-ingress-controller` and will be picked up from the namespace where the chart is deployed. Adding a comment just in case the edits in the answer are rejected. Thanks a lot for your help @NeverEndingQueue! Cheers!!! – ScottSummers May 21 '19 at 07:04
  • Glad I could help. Thanks for your edit, I've adjusted it slightly. I think it's not `{name-of-the-helm-chart}-ingress-nginx-ingress-controller`, but `{name-of-the-helm-chart}-nginx-ingress-controller`. Is that right? – Tom Raganowicz May 21 '19 at 11:06
  • It seems it's actually {namespace}/{release-name}-nginx-ingress-controller. I tried a dry run also, but without a release name, so a random one was chosen (I think), and I grepped `- --configmap=default/washing-ladybird-nginx-ingress-controller` – Yehuda Makarov Jul 14 '19 at 21:33
  • @NeverEndingQueue As of now, the name is actually `{name-of-the-helm-chart}-controller`. – rubik Dec 23 '19 at 22:23
  • @rubik {name-of-the-helm-chart}-nginx-ingress-controller works for me. – NFern Feb 15 '20 at 08:28
  • I had to create a new config map since one with that name did not already exist. To debug the config, you can also exec into the nginx pod and read the nginx.conf file. – NFern Feb 15 '20 at 08:29
  • @NeverEndingQueue, thanks for the answer. Can you please provide more info on automating the deployment flow? For example, in this case, I presume the configmap should already be deployed before the nginx ingress helm chart is deployed? Or can the nginx ingress controller pick up a newly added configmap with the right name after nginx ingress has been deployed? – rishi Nov 18 '20 at 00:25
  • While I tried to add a new configmap on an existing ingress, I noticed there was an existing configmap named 'nginx-ingress-ingress-nginx-controller' without any data, so do I have to add a new one with a different name and edit the deployment to include this configmap as well? – rishi Nov 18 '20 at 00:40
17

One can pass ConfigMap properties at installation time too:

helm install stable/nginx-ingress --name nginx-ingress --set controller.config.use-forwarded-headers='"true"'

NOTE: for non-string values I had to use single quotes around double quotes to get it working.
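For example, to pass the settings from the question this way (a sketch; proxy-body-size is the documented key for the body-size limit, and note the quoting for the non-string values):

helm install stable/nginx-ingress --name nginx-ingress \
  --set controller.config.proxy-read-timeout='"86400"' \
  --set controller.config.proxy-body-size="2g" \
  --set controller.config.use-http2='"false"'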

adnan kamili
  • Thanks for this valid answer too. But I wonder how to pass the http-snippet as a parameter to the helm chart? For example, "http-snippet": "proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=mycache:32m use_temp_path=off max_size=4g inactive=1h;". Thanks – rishi Nov 18 '20 at 00:44
5

If you used helm install to install the ingress-nginx chart and no explicit value was passed for which ConfigMap the nginx controller should look at, the default value seems to be {namespace}/{release-name}-nginx-ingress-controller. This is generated by https://github.com/helm/charts/blob/1e074fc79d0f2ee085ea75bf9bacca9115633fa9/stable/nginx-ingress/templates/controller-deployment.yaml#L67 (see something similar if it's a dead link).

To verify for yourself, find the command you installed the ingress-nginx chart with and add --dry-run --debug to it. This will show you the YAML files generated by Tiller to be applied to the cluster. The line # Source: nginx-ingress/templates/controller-deployment.yaml begins the controller deployment, which has an arg --configmap=. The value of this arg is the name the ConfigMap must have for the controller to detect it and use it to update its own .conf file. It could be passed explicitly, but if it is not, it has a default value.

If a ConfigMap is created with the RIGHT name, the controller's logs will show that it picked up the configuration change and reloaded itself.

This can be verified with kubectl logs <pod-name-of-controller> -n <namespace-arg-if-not-in-default-namespace>. My log messages contained the text Configuration changes detected, backend reload required. These log messages will not be present if the ConfigMap name was wrong.
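For example, a quick way to check for that message (pod and namespace names are placeholders):

kubectl logs <pod-name-of-controller> -n <namespace> | grep "Configuration changes detected"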

I believe the official documentation for this is unnecessarily lacking, but maybe I'm incorrect? I will try to submit a PR with these details. Someone who knows more should help flesh them out so people don't need to stumble on this unnecessarily.

Cheers, thanks for your post.

Yehuda Makarov
4

If you want to provide your own configuration while deploying the nginx-ingress-controller, you can create a wrapper Helm chart over the original nginx-ingress Helm chart and supply your own values.yaml with custom configuration.

Using Helm 3 here.

Create a chart:

$ helm create custom-nginx
$ tree custom-nginx

So my chart structure looks like this:

custom-nginx/
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

There are a few extra things here. Specifically, I don't need the complete templates/ directory and its contents, so I'll just remove those:

$ rm custom-nginx/templates/*
$ rmdir custom-nginx/templates

Now, the chart structure should look like this:

custom-nginx/
├── Chart.yaml
├── charts
└── values.yaml

Since we have to include the original nginx-ingress chart as a dependency, my Chart.yaml looks like this:

 $ cat custom-nginx/Chart.yaml 
apiVersion: v2
name: custom-nginx
description: A Helm chart for Kubernetes

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 1.39.1

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 0.32.0

dependencies:
- name: nginx-ingress
  version: 1.39.1
  repository: https://kubernetes-charts.storage.googleapis.com/ 

Here, appVersion is the nginx-controller docker image version and version matches the nginx-ingress chart version that I am using.

The only thing left is to provide your custom configuration. Here is a stripped-down version of mine:

$ cat custom-nginx/values.yaml 
# Default values for custom-nginx.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

nginx-ingress:
  controller:
    ingressClass: internal-nginx
    replicaCount: 1
    service:
      externalTrafficPolicy: Local
    publishService:
      enabled: true
    autoscaling:
      enabled: true
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: "80"
      targetMemoryUtilizationPercentage: "80"
    resources:
      requests:
        cpu: 1
        memory: 2Gi
      limits:
        cpu: 1
        memory : 2Gi
    metrics:
      enabled: true
    config:
      compute-full-forwarded-for: "true"

We can check the keys that are available to use as configuration (config section in values.yaml) in https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/

And the rest of the configuration can be found here: https://github.com/helm/charts/tree/master/stable/nginx-ingress#configuration

Once configurations are set, just download the dependency of your chart:

$ helm dependency update <path/to/chart>

It's a good practice to do basic checks on your chart before deploying it:

$ helm lint <path/to/chart>
$ helm install --debug --dry-run --namespace <namespace> <release-name> <path/to/chart>

Then deploy your chart (which will deploy your nginx-ingress-controller with your own custom configurations).

Also, since you have a chart now, you can upgrade and roll back your chart.
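A minimal sketch of those commands (release name, namespace and revision are placeholders):

helm install --namespace <namespace> <release-name> <path/to/chart>
helm upgrade --namespace <namespace> <release-name> <path/to/chart>
helm rollback --namespace <namespace> <release-name> <revision>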

Shubham
  • There is no need to provide a wrapper chart just to pass configuration to the ingress-nginx helm chart - this answers "how to centralize default nginx configuration" rather than the question. Using values.config when deploying the chart without wrapping is the answer (and is given above already). I understand that your way is something people might also look for, but it takes up 95% of your answer, while it was not asked for. Overcomplicates it too :) IMHO – Eugen Mayer Jan 16 '22 at 10:42
4

When installing the chart through Terraform, the configuration values can be set as shown below:

resource "helm_release" "ingress_nginx" {
  name       = "nginx"
  repository = "https://kubernetes.github.io/ingress-nginx/"
  chart      = "ingress-nginx"

  version = "4.0.2"
  set {
    name  = "controller.config.proxy-read-timeout"
    value = "86400"
  }
  set {
    name  = "controller.config.proxy-body-size"
    value = "2g"
  }
  set {
    name  = "controller.config.use-http2"
    value = "false"
  }
}
mibollma
2

Just to confirm @NeverEndingQueue's answer above, the name of the config map is present in the nginx-controller pod spec itself, so if you inspect the yaml of the nginx-controller pod: kubectl get po release-name-nginx-ingress-controller-random-sequence -o yaml, under spec.containers, you will find something like:

  - args:
    - /nginx-ingress-controller
    - --default-backend-service=default/release-name-nginx-ingress-default-backend
    - --election-id=ingress-controller-leader
    - --ingress-class=nginx
    - --configmap=default/release-name-nginx-ingress-controller

For example here, a config map named release-name-nginx-ingress-controller in the namespace default needs to be created.
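As a sketch, such a ConfigMap could look like this (the data keys are simply the ones from the question, using the documented key names):

kind: ConfigMap
apiVersion: v1
metadata:
  name: release-name-nginx-ingress-controller
  namespace: default
data:
  proxy-read-timeout: "86400"
  proxy-body-size: "2g"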

Once done, you can verify if the changes have taken place by checking the logs. Normally, you will see something like:

I1116 10:35:45.174127       6 event.go:278] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"default", Name:"release-name-nginx-ingress-controller", UID:"76819abf-4df0-41e3-a3fe-25445e754f32", APIVersion:"v1", ResourceVersion:"62559702", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap default/release-name-nginx-ingress-controller
I1116 10:35:45.184627       6 controller.go:141] Configuration changes detected, backend reload required.
I1116 10:35:45.396920       6 controller.go:157] Backend successfully reloaded.
zakaria amine
2

I managed to update the "large-client-header-buffers" in nginx via a configmap. Here are the steps I followed:

  1. Find the configmap name in the nginx ingress controller pod description:
kubectl -n test-namespace describe pods/test-nginx-ingress-controller-584dd58494-d8fqr | grep configmap
      --configmap=test-namespace/test-nginx-ingress-controller

Note: In my case, the namespace is "test-namespace" and the configmap name is "test-nginx-ingress-controller"

  2. Create a configmap yaml:
cat << EOF > test-nginx-ingress-controller-configmap.yaml 

kind: ConfigMap
apiVersion: v1
metadata:
  name: test-nginx-ingress-controller
  namespace: test-namespace
data:
  large-client-header-buffers: "4 16k"
EOF

Note: Please replace the namespace and configmap name as per the findings in step 1.

  3. Deploy the configmap yaml:
kubectl apply -f test-nginx-ingress-controller-configmap.yaml

Then you will see the change applied in the nginx controller pod after a few minutes.

e.g.:
kubectl -n test-namespace exec -it test-nginx-ingress-controller-584dd58494-d8fqr -- cat /etc/nginx/nginx.conf | grep large
    large_client_header_buffers     4 16k;

Chance
2

Based on NeverEndingQueue's answer, I want to provide an update for Kubernetes v1.23 / Helm 3.

This is my installation command with the --dry-run --debug part added: helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace --dry-run --debug

This is the part we need from the generated output of the command above:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          ...
          args:
            - /nginx-ingress-controller
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
            - --...
            - --configmap=${POD_NAMESPACE}/ingress-nginx-controller
            - --...
            ....

We need this part: --configmap=${POD_NAMESPACE}/ingress-nginx-controller.

As you can see, the name of the ConfigMap must be ingress-nginx-controller and the namespace must be the one you used during chart installation (i.e. {POD_NAMESPACE}; in my example above this is --namespace ingress-nginx).

# nginx-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  map-hash-bucket-size: "128"

Then run kubectl apply -f nginx-config.yaml to apply the ConfigMap, and nginx's pod(s) will be auto-reloaded with the updated config.


To check that the nginx config has been updated, find the name of an nginx pod (you can use any one, if you have a few nodes): kubectl get pods -n ingress-nginx (or kubectl get pods -A)

and then check the config: kubectl exec -it ingress-nginx-controller-{generatedByKubernetesId} -n ingress-nginx -- cat /etc/nginx/nginx.conf

UPDATE:

The correct name (i.e. name: ingress-nginx-controller) is shown in the official docs. Conclusion: no need to reinvent the wheel.

TitanFighter
1

When you apply a ConfigMap with the needed key-value data, the Ingress controller picks up this information and inserts it into the nested nginx-ingress-controller Pod's original configuration file /etc/nginx/nginx.conf. It is therefore easy to verify afterwards whether the ConfigMap's values have been successfully reflected, by checking the actual nginx.conf inside the corresponding Pod.

You can also check the logs from the relevant nginx-ingress-controller Pod to see whether the ConfigMap data has already been reloaded into the backend nginx.conf and, if not, to investigate the reason.
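For instance (pod name, namespace and the grepped directive are placeholders/examples only):

# check the rendered nginx.conf inside the controller Pod
kubectl exec -it <nginx-ingress-controller-pod> -n <namespace> -- grep proxy_read_timeout /etc/nginx/nginx.conf
# check the controller logs for a reload
kubectl logs <nginx-ingress-controller-pod> -n <namespace> | grep -i reload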

Nick_Kh
  • Thanks. Yes, the `ConfigMap` change nicely affects the `nginx.conf` inside. If someone wants to check whether the NginX config was affected from the outside (without going into the pod), you can set either `server_tokens off` or `server_tokens on` and notice whether NginX advertises itself in the HTTP headers. – Tom Raganowicz Feb 26 '19 at 20:35
  • What kind of logs should I see in the controller if a configmap was detected? It seems like I followed everything here and I'm not sure if my .conf is updating. – Yehuda Makarov Jul 14 '19 at 16:11
  • `kubectl exec -ndefault nginx-ingress-controller-b545558d8-829dz -- cat /etc/nginx/nginx.conf | grep tokens` for example. – mpen Jun 19 '20 at 03:27
1

An easier way of doing this is just modifying the values that are deployed through helm. The values that need to go into the ConfigMap are now under controller.config.entries. Show the latest values with helm show values nginx-stable/nginx-ingress and look for the format in the version you are running.

I had tons of issues with this since all references online said controller.config, until I checked with the command above.
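For example, a minimal values file under that layout might look like this (a sketch, assuming the controller.config.entries structure described above; check helm show values for the keys your chart version accepts):

controller:
  config:
    entries:
      proxy-read-timeout: "86400"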

After you've entered the values, upgrade with:

helm upgrade -f <PATH_TO_FILE>.yaml <NAME> nginx-stable/nginx-ingress
JohanLejdung
  • Just to be sure, for the current version of the helm chart today, `controller.config` is correct, no need to nest under `entries` - source is https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/templates/controller-configmap.yaml#L26 - version 4.0.13 – Eugen Mayer Jan 16 '22 at 10:48
1

Using enable-underscores-in-headers=true worked for me, not enable-underscores-in-headers='"true"':

helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-basic \
  --set controller.config.enable-underscores-in-headers=true

Rohit Yadav
0

What you have is an Ingress yaml, not an Ingress controller deployment yaml. The Ingress controller is the Pod that actually does the work and is usually an nginx container itself. An example of such a configuration can be found here in the documentation you shared.

UPDATE

Using the example provided, you can also load config into nginx from a config map in the following way:

      volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
    volumes:
      - name: nginx-config
        configMap:
          name: nginx-config

Here nginx-config is a ConfigMap that contains your nginx configuration.
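A sketch of creating that ConfigMap from a local nginx.conf file (the local file name is an assumption):

kubectl create configmap nginx-config --from-file=nginx.conf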

fatcook
  • As you've pointed out, the custom template is one way of configuring the NginX controller: [custom-template](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/), but the ConfigMap with its own key convention here: [configmap](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/) is another way. Please note that the `configmap` approach provides configuration directly in `data:`. I am not looking for how to load a custom template from a ConfigMap, but how to load config from a ConfigMap directly. – Tom Raganowicz Feb 26 '19 at 14:44
0

I read the above answers but could not make it work.

What worked for me was the following:

release_name=tcp-udp-ic

# add the helm repo from NginX and update the chart
helm repo add nginx-stable https://helm.nginx.com/stable
helm repo update

echo "- Installing -${release_name}- into cluster ..."

# delete the config map if it already exists
kubectl delete cm tcp-udp-ic-cm

helm del --purge ${release_name}
helm upgrade --install ${release_name} \
--set controller.image.tag=1.6.0 \
--set controller.config.name=tcp-udp-ic-cm \
nginx-stable/nginx-ingress --version 0.4.0 #--dry-run --debug

# update the /etc/nginx/nginx.conf file with my attributes, via the config map
kubectl apply -f tcp-udp-ic-cm.yaml

and the tcp-udp-ic-cm.yaml is:

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-udp-ic-cm
  namespace: default
data:
  worker-connections: "10000"

Essentially I need to deploy the release with helm and set the name of the config map that it is going to use. Helm creates the config map, but empty. Then I apply the config map file in order to update the config map resource with my values. This sequence is the only one I could make work.

Kostas Demiris
0
  1. The nginx ingress controller may cause issues with forwarding. We were able to get it working with nginx via X-Forwarded-Proto etc., but it was a bit complicated and convoluted.
  2. Moving to haproxy instead resolved this problem. Also, make sure you are interfacing with the ingress controller over https, or that may cause issues with keycloak.

Keycloak v18 with --proxy edge

  annotations:
    kubernetes.io/ingress.class: haproxy
  ...
Carter