
Trying to figure out how to expose multiple TCP/UDP services using a single LoadBalancer on Kubernetes. Let's say the services are ftpsrv1.com and ftpsrv2.com, each serving on port 21.

Here are the options I can think of and their limitations:

  • One LB per service: too expensive.
  • NodePort: I want to use a port outside the 30000-32767 range.
  • K8s Ingress: does not support TCP or UDP services as of now.
  • NGINX Ingress controller: again a one-to-one mapping.
  • A custom implementation I found: it doesn't seem to be maintained; the last update was almost a year ago.

Any inputs will be greatly appreciated.

Ali

4 Answers


It's actually possible to do it using NGINX Ingress.

Ingress does not support TCP or UDP services. For this reason this Ingress controller uses the flags --tcp-services-configmap and --udp-services-configmap to point to an existing config map where the key is the external port to use and the value indicates the service to expose using the format: <namespace/service name>:<service port>:[PROXY]:[PROXY].
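Applied to the question's scenario, a tcp-services ConfigMap exposing two hypothetical FTP services on distinct external ports might look like the sketch below (the service names ftpsrv1/ftpsrv2 and the external ports are assumptions for illustration, not part of the guide that follows):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: kube-system
data:
  # "<external port>": "<namespace>/<service name>:<service port>"
  "2121": "default/ftpsrv1:21"
  "2122": "default/ftpsrv2:21"
```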

This guide describes how it can be achieved using minikube, but doing it on an on-premises Kubernetes cluster is different and requires a few more steps.

There is a lack of documentation describing how this can be done on a non-minikube system, which is why I decided to go through all the steps here. This guide assumes you have a fresh cluster with no NGINX Ingress installed.

I'm using a GKE cluster and all commands are run from my Linux workstation. The same can be done on a bare-metal Kubernetes cluster.

Create sample application and service

Here we are going to create an application and its Service, which we will later expose through our ingress.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: default
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - image: redis
        imagePullPolicy: Always
        name: redis
        ports:
        - containerPort: 6379
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  namespace: default
spec:
  selector:
    app: redis
  type: ClusterIP
  ports:
    - name: tcp-port
      port: 6379
      targetPort: 6379
      protocol: TCP
---      
apiVersion: v1
kind: Service
metadata:
  name: redis-service2
  namespace: default
spec:
  selector:
    app: redis
  type: ClusterIP
  ports:
    - name: tcp-port
      port: 6380
      targetPort: 6379
      protocol: TCP      

Notice that we are creating two different Services for the same application. This is only a proof of concept; I want to show later that many ports can be mapped using only one Ingress.

Installing NGINX Ingress using Helm:

Install helm 3:

$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash

Add NGINX Ingress repo:

$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

Install NGINX Ingress in the kube-system namespace:

$ helm install -n kube-system ingress-nginx ingress-nginx/ingress-nginx
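Before moving on, it may help to confirm the controller pod is up; the label selector below assumes the chart's default labels:

```shell
$ kubectl get pods -n kube-system -l app.kubernetes.io/name=ingress-nginx
```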

Preparing our new NGINX Ingress Controller Deployment

We have to add the following lines under spec.template.spec.containers.args:

        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services

So we have to edit using the following command:

$ kubectl edit deployments -n kube-system ingress-nginx-controller

And make it look like this:

...
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --publish-service=kube-system/ingress-nginx-controller
        - --election-id=ingress-controller-leader
        - --ingress-class=nginx
        - --configmap=kube-system/ingress-nginx-controller
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
...

Create the tcp/udp services ConfigMaps

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: kube-system

Since these ConfigMaps are centralized and may already contain other configurations, it is best to patch them rather than completely overwrite them every time you add a service:

$ kubectl patch configmap tcp-services -n kube-system --patch '{"data":{"6379":"default/redis-service:6379"}}'
$ kubectl patch configmap tcp-services -n kube-system --patch '{"data":{"6380":"default/redis-service2:6380"}}'

Where:

  • 6379: the external port your service should be reachable on from outside the cluster
  • default: the namespace your service is installed in
  • redis-service: the name of the service
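UDP services are handled the same way through the udp-services ConfigMap. As a sketch (the DNS service named here is hypothetical, not part of this guide's sample application):

```shell
$ kubectl patch configmap udp-services -n kube-system --patch '{"data":{"53":"default/dns-service:53"}}'
```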

We can verify that our resource was patched with the following command:

$ kubectl get configmap tcp-services -n kube-system -o yaml

apiVersion: v1
data:
  "6379": default/redis-service:6379
  "6380": default/redis-service2:6380
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ConfigMap","metadata":{"annotations":{},"name":"tcp-services","namespace":"kube-system"}}
  creationTimestamp: "2020-04-27T14:40:41Z"
  name: tcp-services
  namespace: kube-system
  resourceVersion: "7437"
  selfLink: /api/v1/namespaces/kube-system/configmaps/tcp-services
  uid: 11b01605-8895-11ea-b40b-42010a9a0050

The only thing you need to validate is that there are entries under the data property that look like this:

  "6379": default/redis-service:6379
  "6380": default/redis-service2:6380

Add ports to NGINX Ingress Controller Deployment

We need to patch our NGINX Ingress Controller Deployment so that it listens on ports 6379/6380 and can route traffic to your services.

spec:
  template:
    spec:
      containers:
      - name: controller
        ports:
         - containerPort: 6379
           hostPort: 6379
         - containerPort: 6380
           hostPort: 6380 

Create a file called nginx-ingress-controller-patch.yaml and paste the contents above.

Next apply the changes with the following command:

$ kubectl patch deployment ingress-nginx-controller -n kube-system --patch "$(cat nginx-ingress-controller-patch.yaml)"

Add ports to NGINX Ingress Controller Service

Unlike the solution presented for minikube, here we also have to patch our NGINX Ingress Controller Service, as it is responsible for exposing these ports.

spec:
  ports:
  - nodePort: 31100
    port: 6379
    name: redis
  - nodePort: 31101
    port: 6380
    name: redis2

Create a file called nginx-ingress-svc-controller-patch.yaml and paste the contents above.

Next apply the changes with the following command:

$ kubectl patch service ingress-nginx-controller -n kube-system --patch "$(cat nginx-ingress-svc-controller-patch.yaml)"

Check our service

$ kubectl get service -n kube-system ingress-nginx-controller
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                                                    AGE
ingress-nginx-controller   LoadBalancer   10.15.251.203   34.89.108.48   6379:31100/TCP,6380:31101/TCP,80:30752/TCP,443:30268/TCP   38m

Notice that our ingress-nginx-controller is now listening on ports 6379/6380.

Test that you can reach your service with telnet via the following command:

$ telnet 34.89.108.48 6379

You should see the following output:

Trying 34.89.108.48...
Connected to 34.89.108.48.
Escape character is '^]'.

To exit telnet, press Ctrl+], then type quit and press Enter.

We can also test port 6380:

$ telnet 34.89.108.48 6380
Trying 34.89.108.48...
Connected to 34.89.108.48.
Escape character is '^]'.

If you were not able to connect, please review the steps above.
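Since the sample application is Redis, you can also test end-to-end with redis-cli, assuming it is installed on your workstation (replace the IP with your own EXTERNAL-IP). A successful round trip through the load balancer replies PONG:

```shell
$ redis-cli -h 34.89.108.48 -p 6379 ping
```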


Mark Watney
  • Thank you for the great answer. It is the most detailed one I could find so far. There is still something I don't understand: what exactly do you mean by "Since these configmaps are centralized and may contain configurations"? If I had to guess, I would say this is a hint at a more advanced scenario (a production environment) that does not play any role in your example, right? I would like to exclude any failure sources that might cause my setup to fail. – gokumc Nov 16 '20 at 18:59
  • Thank you for the feedback. This comment is to highlight that it's more practical to patch the configmap instead of editing it or applying an edited yaml over it. – Mark Watney Nov 17 '20 at 10:16
  • Ok, I think I got it, thank you. In the meantime I was able to get my TCP ingress controller patch to work, but UDP does not work yet. How would I change the ingress-nginx-controller service patch to allow UDP traffic? I simply tried to apply the following patch: `spec: ports: - nodePort: 31101 port: 3478 name: turn-server protocol: UDP`, but it ended up with the following error message: "[...]cannot create an external load balancer with mix protocols" – gokumc Nov 17 '20 at 19:30
  • you have a copy paste error in the second patch, the filename should be nginx-ingress-svc-controller-patch.yaml – Patrick Koorevaar Jan 19 '21 at 15:02
  • @PatrickKoorevaar thank you very much for pointing that out. I edited the answer correcting it. – Mark Watney Jan 19 '21 at 15:52
  • Does a similar solution exist for [haproxy](https://github.com/haproxytech/kubernetes-ingress)? – Max Jun 11 '21 at 14:25
  • Thank you so much!! I can't give you enough points. – huggie Sep 02 '21 at 05:45
  • I was wondering, Don't we need an ingress resource? – PraveenMak Oct 28 '21 at 04:02
  • If you are using PowerShell, you cannot use `cat` in your patch command but should use `$(Get-Content filename.yaml -Raw)`, or you get weird yaml errors. – Luke Briner Jan 20 '22 at 10:52
  • `kubectl edit deployments -n kube-system ingress-nginx-controller` was the missing step I couldn't find anywhere else. After that (in addition to creating the configmap and patching the Ingress Controller Service) TCP access just worked fine. – bczoma Feb 15 '22 at 17:56
  • Thank you, you saved me a couple of hours, RAM and CPU :) – Witold Kupś Oct 04 '22 at 12:10

The accepted answer from Mark Watney works great, but there is no need to manually edit and patch configs; Helm can do it for you.

Download the default values.yaml file for ingress-nginx.
Change

tcp: {}
#  8080: "default/example-tcp-svc:9000"

to

tcp:
  6379: default/redis-service:6379
  6380: default/redis-service:6380

The following command will install or upgrade (if already installed) your NGINX controller, create the required ConfigMap, and update the config fields:

helm upgrade --install -n kube-system ingress-nginx ingress-nginx/ingress-nginx --values values.yaml --wait
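Alternatively, the same mappings can be passed inline with --set instead of maintaining a local values.yaml (a sketch; quoting may need adjusting for your shell):

```shell
$ helm upgrade --install -n kube-system ingress-nginx ingress-nginx/ingress-nginx \
    --set tcp.6379="default/redis-service:6379" \
    --set tcp.6380="default/redis-service2:6380" \
    --wait
```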
Kochetov Dmitry

@mWatney's answer is great. However, it doesn't work with UDP, because you can't have a load balancer with mixed protocols with ingress-nginx.

To get around this, you'll actually need to add a new load balancer dedicated to just UDP services, as well as another ingress controller deployment.

This is what worked for me after following all @mWatney's steps (I didn't use the kube-system namespace though, just stuck with ingress-nginx):

  1. Apply this deployment

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        helm.sh/chart: ingress-nginx-3.10.1
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.41.2
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: controller
      name: ingress-nginx-udp-controller
      namespace: ingress-nginx
    spec:
      selector:
        matchLabels:
          app.kubernetes.io/name: ingress-nginx
          app.kubernetes.io/instance: ingress-nginx
          app.kubernetes.io/component: udp-controller
      revisionHistoryLimit: 10
      minReadySeconds: 0
      template:
        metadata:
          labels:
            app.kubernetes.io/name: ingress-nginx
            app.kubernetes.io/instance: ingress-nginx
            app.kubernetes.io/component: udp-controller
        spec:
          dnsPolicy: ClusterFirst
          containers:
            - name: udp-controller
              image: k8s.gcr.io/ingress-nginx/controller:v0.41.2@sha256:1f4f402b9c14f3ae92b11ada1dfe9893a88f0faeb0b2f4b903e2c67a0c3bf0de
              imagePullPolicy: IfNotPresent
              lifecycle:
                preStop:
                  exec:
                    command:
                      - /wait-shutdown
              args:
                - /nginx-ingress-controller
                - --publish-service=$(POD_NAMESPACE)/ingress-nginx-udp-controller
                - --election-id=ingress-controller-leader
                - --ingress-class=nginx
                - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
                - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
                - --validating-webhook=:8443
                - --validating-webhook-certificate=/usr/local/certificates/cert
                - --validating-webhook-key=/usr/local/certificates/key
              securityContext:
                capabilities:
                  drop:
                    - ALL
                  add:
                    - NET_BIND_SERVICE
                runAsUser: 101
                allowPrivilegeEscalation: true
              env:
                - name: POD_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: POD_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
                - name: LD_PRELOAD
                  value: /usr/local/lib/libmimalloc.so
              livenessProbe:
                httpGet:
                  path: /healthz
                  port: 10254
                  scheme: HTTP
                initialDelaySeconds: 10
                periodSeconds: 10
                timeoutSeconds: 1
                successThreshold: 1
                failureThreshold: 5
              readinessProbe:
                httpGet:
                  path: /healthz
                  port: 10254
                  scheme: HTTP
                initialDelaySeconds: 10
                periodSeconds: 10
                timeoutSeconds: 1
                successThreshold: 1
                failureThreshold: 3
              volumeMounts:
                - name: webhook-cert
                  mountPath: /usr/local/certificates/
                  readOnly: true
              resources:
                requests:
                  cpu: 100m
                  memory: 90Mi
          nodeSelector:
            kubernetes.io/os: linux
          serviceAccountName: ingress-nginx
          terminationGracePeriodSeconds: 300
          volumes:
            - name: webhook-cert
              secret:
                secretName: ingress-nginx-admission
 
  2. Apply this service


    apiVersion: v1
    kind: Service
    metadata:
      labels:
        helm.sh/chart: ingress-nginx-3.10.1
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.41.2
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: udp-controller
      name: ingress-nginx-udp-controller
      namespace: ingress-nginx
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local
      ports:
        - name: udp
          port: 5004
          protocol: UDP
          targetPort: 5004
      selector:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: udp-controller

Running `kubectl get services -n ingress-nginx` should give you something similar to:


NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.103.60.70     localhost     80:30885/TCP,443:30370/TCP   13m
ingress-nginx-controller-admission   ClusterIP      10.111.245.103   <none>        443/TCP                      14d
ingress-nginx-udp-controller         LoadBalancer   10.111.249.180   localhost     5004:30565/UDP               9m48s

To test that it's working, you can use netcat to hit your UDP server, e.g. `nc -u -v localhost 5004`.

Brandon
  • Since 1.9.13 NGINX provides UDP Load Balancing. https://www.nginx.com/blog/announcing-udp-load-balancing/ – SergICE Mar 08 '23 at 11:50

In regard to "NodePort: Want to use a port outside the 30000-32767 range":

You can manually select the port for your service, per Service, via the nodePort setting in the Service's yaml file, or set the flag indicated below so that your custom port range is allocated automatically for all Services.

From the docs: "If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by the --service-node-port-range flag (default: 30000-32767)."
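As a sketch, a Service pinning a specific NodePort could look like the fragment below. The port 8021 outside the default range only works on a cluster whose API server was started with, for example, --service-node-port-range=8000-32767 (both the service name and the extended range are assumptions for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ftpsrv1            # hypothetical service from the question
spec:
  type: NodePort
  selector:
    app: ftpsrv1
  ports:
    - port: 21
      targetPort: 21
      nodePort: 8021       # valid only with an extended --service-node-port-range
```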