
I can't access the network IP assigned by the MetalLB load balancer

I created a Kubernetes cluster with k3s. It has 1 master and 1 worker, each with its own private IP.

Master 192.168.0.13

Worker 192.168.0.14

I installed k3s with INSTALL_K3S_EXEC="--no-deploy servicelb --no-deploy traefik"

Now I am trying to deploy an app using MetalLB and the nginx ingress controller:

helm install metallb stable/metallb --namespace kube-system \
  --set configInline.address-pools[0].name=default \
  --set configInline.address-pools[0].protocol=layer2 \
  --set configInline.address-pools[0].addresses[0]=192.168.0.21-192.168.0.30
helm install nginx-ingress stable/nginx-ingress --namespace kube-system \
    --set controller.image.repository=quay.io/kubernetes-ingress-controller/nginx-ingress-controller \
    --set controller.image.tag=0.30.0 \
    --set controller.image.runAsUser=33 \
    --set defaultBackend.enabled=false

I can see every pod up and running:

NAME                                             READY   STATUS    RESTARTS   AGE    IP             NODE             NOMINATED NODE   READINESS GATES
coredns-d798c9dd-lsdnp                           1/1     Running   5          37h    10.42.0.25     c271-k3s-ocrh    <none>           <none>
local-path-provisioner-58fb86bdfd-bcpl7          1/1     Running   5          37h    10.42.0.22     c271-k3s-ocrh    <none>           <none>
metrics-server-6d684c7b5-v9tmh                   1/1     Running   5          37h    10.42.0.24     c271-k3s-ocrh    <none>           <none>
metallb-speaker-4kbmw                            1/1     Running   0          4m7s   192.168.0.14   c271-k3s-agent   <none>           <none>
metallb-controller-75bf779d4f-nb47l              1/1     Running   0          4m7s   10.42.1.45     c271-k3s-agent   <none>           <none>
metallb-speaker-776p9                            1/1     Running   0          4m7s   192.168.0.13   c271-k3s-ocrh    <none>           <none>
nginx-ingress-default-backend-5b967cf596-554bq   1/1     Running   0          98s    10.42.1.46     c271-k3s-agent   <none>           <none>
nginx-ingress-controller-674675d5b6-blndp        1/1     Running   0          98s    10.42.1.47     c271-k3s-agent   <none>           <none>

The ingress controller service gets the external IP 192.168.0.21:

❯ kubectl get services  -n kube-system -l app=nginx-ingress -o wide
NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE    SELECTOR
nginx-ingress-default-backend   ClusterIP      10.43.170.195   <none>         80/TCP                       112s   app=nginx-ingress,component=default-backend,release=nginx-ingress
nginx-ingress-controller        LoadBalancer   10.43.220.166   192.168.0.21   80:31735/TCP,443:31566/TCP   111s   app=nginx-ingress,component=controller,release=nginx-ingress

I can access the app from the master and the worker by curling the nginx controller pod:

HTTP/1.1 200 OK
Server: nginx/1.17.8
Date: Sat, 21 Mar 2020 10:43:34 GMT
Content-Type: text/html
Content-Length: 153
Connection: keep-alive

But the IP 192.168.0.21 is not accessible from the local network.

Diagnosis: DHCP is on, and 192.168.0.21-192.168.0.30 is completely free. When I allocate 192.168.0.21 to the master or the agent via a netplan config, they do get the IP.

Please guide me; what am I missing?

Rahul Sharma

3 Answers


You need to make sure that the source IP address (the external IP assigned by MetalLB) is preserved. To achieve this, set the externalTrafficPolicy field of the ingress controller's Service spec to Local. For example:

apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    helm.sh/chart: webapp-0.1.0
    app.kubernetes.io/name: webapp
    app.kubernetes.io/instance: my-app
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: webapp
    app.kubernetes.io/instance: my-app
  externalTrafficPolicy: Local

The default value of the externalTrafficPolicy field is Cluster, so change it to Local. Note that externalTrafficPolicy may only be set on Services of type NodePort or LoadBalancer.
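With the stable/nginx-ingress chart used in the question, the same setting can be applied through chart values instead of editing the Service by hand. A minimal sketch, assuming the chart exposes controller.service.externalTrafficPolicy (the file name values-local.yaml is hypothetical):

```yaml
# values-local.yaml -- assumed values override for stable/nginx-ingress
controller:
  service:
    # Preserve client source IPs and keep traffic on the node that
    # received it, instead of the default Cluster-wide redistribution.
    externalTrafficPolicy: Local
```

Applied with something like `helm upgrade nginx-ingress stable/nginx-ingress --namespace kube-system --reuse-values -f values-local.yaml`.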

  • is invalid: spec.externalTrafficPolicy: Invalid value: "Local": may only be set when `type` is 'NodePort' or 'LoadBalancer' – VocoJax Nov 18 '22 at 02:16

In my setup with Cilium and the HAProxy ingress controller, I had to change externalTrafficPolicy from Local to Cluster:

kubectl --namespace ingress-controller patch svc haproxy-ingress \
 -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'
Oleg Neumyvakin

For two years I've been using MetalLB in my home lab, and I haven't hit this error (although I have hit others, for example MetalLB failing to assign an IP address from the pool).

I want to share my current setup with folks who are still struggling.

helm install --create-namespace metallb metallb/metallb -n metallb-system -f values.yaml

configInline:
  address-pools:
    - name: default
      protocol: layer2
      addresses:
        - 192.168.0.21/30
      # a range like 192.168.0.21-192.168.0.24 works too
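In MetalLB releases from v0.13 onward, the chart's configInline field was removed and address pools are configured with CRDs instead. A rough equivalent of the pool above, assuming MetalLB is installed in the metallb-system namespace:

```yaml
# IPAddressPool replaces the old configInline address-pools entry.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
    - 192.168.0.21/30
---
# L2Advertisement replaces protocol: layer2 for the pool above.
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default
```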

Debugging: try to get the logs from all the pods in the MetalLB namespace.

kail -n metallb-system
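If you don't have kail installed, plain kubectl can do much the same; a sketch, assuming the chart's standard app.kubernetes.io/name=metallb labels:

```shell
# Stream logs from every MetalLB container (controller and speakers),
# prefixing each line with its pod/container name.
kubectl logs -n metallb-system -l app.kubernetes.io/name=metallb \
  --all-containers --prefix
```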

Kubernetes was installed with Calico using https://github.com/geerlingguy/ansible-role-kubernetes

Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.1", GitCommit:"3ddd0f45aa91e2f30c70734b175631bec5b5825a", GitTreeState:"clean", BuildDate:"2022-05-24T12:17:11Z", GoVersion:"go1.18.2", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3", GitCommit:"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb", GitTreeState:"clean", BuildDate:"2022-07-13T14:23:26Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}

Switching externalTrafficPolicy to Local/Cluster may also help; however, I didn't try it, since my setup works out of the box.

Good Luck.

Rahul Sharma
  • In the latest Helm charts the `configInline` field is removed; the address pool is now configured via CRDs. Consider updating the answer. – P.... Oct 14 '22 at 16:25