
I have a bare metal Kubernetes cluster (a Pi cluster). It's got a simple hello world web page spread across the nodes, and it's working fine. I've since created a service to get it exposed on the public side of things, but the site won't render. It seems that the announcement isn't being published.

My ConfigMap is pretty simple.

metadata:
  name: metallb
  namespace: metallb-system
  selfLink: /api/v1/namespaces/metallb-system/configmaps/metallb
  uid: 89d1e418-989a-4869-9da1-244409f8f700
  resourceVersion: '1086283'
  creationTimestamp: '2020-06-09T00:34:07Z'
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"v1","data":{"config":"address-pools:\n- name: default\n 
      protocol: layer2\n  addresses:\n  -
      192.168.1.16-192.168.1.31\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"metallb","namespace":"metallb-system"}}
  managedFields:
    - manager: kubectl
      operation: Update
      apiVersion: v1
      time: '2020-06-09T00:34:07Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:data':
          .: {}
          'f:config': {}
        'f:metadata':
          'f:annotations':
            .: {}
            'f:kubectl.kubernetes.io/last-applied-configuration': {}
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.16-192.168.1.31
kind: ConfigMap

and my service looks fine:

testapp-service   LoadBalancer   10.107.213.151   192.168.1.16   8081:30470/TCP   7m40s
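
For reference, the Service itself is just a plain LoadBalancer Service along these lines (the selector and targetPort here are illustrative, not necessarily exactly what testapp-service uses):

apiVersion: v1
kind: Service
metadata:
  name: testapp-service
spec:
  type: LoadBalancer
  selector:
    app: testapp        # illustrative label for the hello-world pods
  ports:
    - port: 8081        # the port shown in the kubectl output above
      targetPort: 80    # whatever port the pods actually listen on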

From the master node, I can curl 192.168.1.16:8081 and get back the data I'd expect. However, from any other machine on the 192.168.1.0 network, I can't get it to render at all.

I know the public addresses aren't overlapping with anything else: I have the 192.168.1.16-192.168.1.31 range excluded from my DHCP server's pool, so nothing else uses addresses in that range.

So what does it take to get my master-001 node to announce that it is handling traffic for 192.168.1.16? (It has its own address at .250, and that one does get announced, but that isn't the service address.)

I'm using Ubuntu 20 on Raspberry Pi 4s. The 192.168.1.x addresses are the Wi-Fi side of things; the 10.x addresses are the wired side.

Thanks, Nick

Nick Jacobs

3 Answers


In layer 2 mode, the address range you give to MetalLB and the node IPs must be in the same subnet. What are the IP addresses of the nodes?

A packet destined for the service IP (192.168.1.16) must first reach the layer 2 domain of the cluster nodes so that it can be routed to the node handling that IP, which means the node IPs must also be in the 192.168.1.0 network.
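
One quick way to check whether anything is actually announcing the service IP is to ARP for it from another machine in the same L2 segment (the interface name here is just an example):

# run from another host on the 192.168.1.0/24 network
arping -I wlan0 192.168.1.16
# replies should come from the MAC of whichever node MetalLB elected to announce the IP;
# no reply means no node is answering ARP for the service IP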

If only the master node is connected to the public network, try adding a nodeAffinity to the speaker DaemonSet so that speaker pods are scheduled only on those nodes.
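
A sketch of the relevant part of the speaker DaemonSet spec (the node name is just an example; match it to whichever nodes have a 192.168.1.x interface):

spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchFields:
                  - key: metadata.name
                    operator: In
                    values:
                      - master-001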

Shashank V
  • They are on the 10 net. Master-001: WLan (public) 192.168.1.250, Eth0 (private) 10.0.0.1; Node-001 (eth only): 10.0.0.2; Node-002 (eth only): 10.0.0.3. Isn't the point that I can have a public 192.168.1.x address exposed to other computers on the 192.168.1.x network? So it becomes my public address routing to a service inside the private network? If I use NodePort, this all works fine. But I'm trying not to use NodePort and to use a LoadBalancer instead. – Nick Jacobs Jun 19 '20 at 00:57
  • If you want to use the `192.168.1.x` network, then all your nodes must also have an interface connected to that network. According to your description, only `Master-001` is connected to `192.168.1.x`, right? – Shashank V Jun 19 '20 at 09:04
  • You could do one thing: try setting nodeAffinity on the speaker daemonset so that speaker pods start only on the nodes that are connected to the `192.168.1.x` network. – Shashank V Jun 19 '20 at 09:37
  • Would that be to just add nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - master-001 – Nick Jacobs Jun 20 '20 at 21:30
  • I reworked my cluster to look like this: Master-001: wlan 192.168.1.250, eth0 10.0.0.1; Worker-001: wlan 192.168.1.11, eth0 10.0.0.2; Worker-002: wlan 192.168.1.12, eth0 10.0.0.3. Once in a while, I can actually use the service at 192.168.1.17 from my Mac on the 192.168 network, but it seems to lose that path after a short duration. (I'm just hitting NGINX's landing page to test with.) It's almost as if it works for a short time and then doesn't renew the TTL. Does this seem right? (I'll be trying the nodeAffinity shortly.) – Nick Jacobs Jun 21 '20 at 15:16
  • Yes, that looks good from the networking point of view. I have never tried MetalLB over wireless. I just found this issue: https://github.com/raspberrypi/linux/issues/2677. Check if it is relevant for your setup. If all your nodes are connected to the public network, you don't need nodeAffinity anymore. – Shashank V Jun 21 '20 at 16:16
  • Not sure if it was putting the wireless interface in promiscuous mode or not; I made a lot of changes, but this all did get me on the right track. – Nick Jacobs Jun 23 '20 at 23:41

Since MetalLB v0.13.0, configuration is done via CRDs rather than a ConfigMap, so you'll use an IPAddressPool object.

If you want to use Layer 2 mode, remember to add an L2Advertisement object, otherwise MetalLB won't respond to ARP requests.
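
For the layer 2 setup described in the question, the two objects would look roughly like this (the names are arbitrary):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.16-192.168.1.31
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default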

In my haste, I completely skipped past that part of the documentation and wasted 30 minutes attempting to debug the problem.

Roger Lipscombe

I had the same problem with my bare metal cluster. Like Roger Lipscombe's answer above, I had forgotten to include the L2Advertisement, but not only that, I also had to patch kube-proxy to set strictARP: false.

This is how I installed MetalLB (a load balancer for bare metal).

See https://metallb.universe.tf/installation/ and be sure to configure as per https://metallb.universe.tf/configuration/

In my lab, I patched the kube-proxy config and changed strictARP to false:

kubectl edit configmap -n kube-system kube-proxy

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ""
ipvs:
  strictARP: false

Then run the following:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml

I then have the following manifests to apply (copied and edited from the examples on the configuration page). In my case, I created the files under ~/k8s/MetalLB to configure MetalLB; both are sketched after the commands below.

kubectl apply -f ip-addresspool.yaml (be sure to edit the address pool to the IP addresses you wish to assign to your Kubernetes services)

kubectl apply -f L2advertisement.yaml
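
Both files are close to the examples on the configuration page; roughly like this (the pool name and address range are placeholders, change them to suit your network):

# ip-addresspool.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # placeholder range; must be unused on your LAN

# L2advertisement.yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
    - first-pool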

And voilà, we have a working bare metal Kubernetes cluster with a LoadBalancer.

Surfingjoe