I have a Kubernetes cluster running a Jenkins Pod with a Service set up for MetalLB. Currently, when I try to hit the loadBalancerIP for that Pod from outside my cluster, I can't reach it. I also have a kube-verify Pod running on the cluster with a Service that also uses MetalLB, and I can hit that Pod from outside the cluster with no problem.

When I switch the Service for the Jenkins Pod to type NodePort it works, but as soon as I switch it back to type LoadBalancer it stops working. Both the Jenkins Pod and the working kube-verify Pod are running on the same node.

Cluster Details: The master node is running and is connected to my router wirelessly. On the master node I have dnsmasq set up along with iptables rules that forward connections from the wireless interface to the Ethernet interface. The nodes are connected to each other over Ethernet via a switch. MetalLB is set up in layer 2 mode with an address pool on the same subnet as the IP address of the master node's wireless interface. kube-proxy is set to use strictARP and IPVS mode.
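For reference, kube-proxy and MetalLB are configured roughly as follows (a sketch; the address range below is illustrative, the real values live in my MetalLB ConfigMap):

# Excerpt from the kube-proxy ConfigMap (kube-system/kube-proxy, key config.conf)
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
---
# MetalLB layer 2 address pool (pre-CRD ConfigMap format)
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.16.1.2-172.16.1.20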

Jenkins Manifest:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-sa
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
---
apiVersion: v1
kind: Secret
metadata:
  name: jenkins-secret
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
type: Opaque
data:
  jenkins-admin-password: ***************
  jenkins-admin-user: ********
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: jenkins
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
data:
  jenkins.yaml: |-
    jenkins:
      authorizationStrategy:
        loggedInUsersCanDoAnything:
          allowAnonymousRead: false
      securityRealm:
        local:
          allowsSignup: false
          enableCaptcha: false
          users:
          - id: "${jenkins-admin-username}"
            name: "Jenkins Admin"
            password: "${jenkins-admin-password}"
      disableRememberMe: false
      mode: NORMAL
      numExecutors: 0
      labelString: ""
      projectNamingStrategy: "standard"
      markupFormatter:
        plainText
      clouds:
      - kubernetes:
          containerCapStr: "10"
          defaultsProviderTemplate: "jenkins-base"
          connectTimeout: "5"
          readTimeout: 15
          jenkinsUrl: "jenkins-ui:8080"
          jenkinsTunnel: "jenkins-discover:50000"
          maxRequestsPerHostStr: "32"
          name: "kubernetes"
          serverUrl: "https://kubernetes"
          podLabels:
          - key: "jenkins/jenkins-agent"
            value: "true"
          templates:
            - name: "default"
          #id: eeb122dab57104444f5bf23ca29f3550fbc187b9d7a51036ea513e2a99fecf0f
              containers:
              - name: "jnlp"
                alwaysPullImage: false
                args: "^${computer.jnlpmac} ^${computer.name}"
                command: ""
                envVars:
                - envVar:
                    key: "JENKINS_URL"
                    value: "jenkins-ui:8080"
                image: "jenkins/inbound-agent:4.11-1"
                ttyEnabled: false
                workingDir: "/home/jenkins/agent"
              idleMinutes: 0
              instanceCap: 2147483647
              label: "jenkins-agent"
              nodeUsageMode: "NORMAL"
              podRetention: Never
              showRawYaml: true
              serviceAccount: "jenkins-sa"
              slaveConnectTimeoutStr: "100"
              yamlMergeStrategy: override
      crumbIssuer:
        standard:
          excludeClientIPFromCrumb: true
    security:
      apiToken:
        creationOfLegacyTokenEnabled: false
        tokenGenerationOnCreationEnabled: false
        usageStatisticsEnabled: true
    unclassified:
      location:
        adminAddress:
        url: jenkins-ui:8080
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  claimRef:
    name: jenkins-pv-claim
    namespace: devops-tools
  capacity:
    storage: 16Gi
  accessModes:
    - ReadWriteMany
  local:
    path: /mnt
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - heine-cluster1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
  namespace: devops-tools
  labels:
    app: jenkins
    version: v1
    tier: backend
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins-cr
rules:
- apiGroups: [""]
  resources: ["*"]
  verbs: ["*"]
---
# This role is used to allow Jenkins scheduling of agents via Kubernetes plugin. 
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins-role-schedule-agents
  namespace: devops-tools
  labels:
    app: jenkins
    version: v1
    tier: backend
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec", "pods/log", "persistentvolumeclaims", "events"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods", "pods/exec", "persistentvolumeclaims"]
  verbs: ["create", "delete", "deletecollection", "patch", "update"]
---
# The sidecar container which is responsible for reloading configuration changes
# needs permissions to watch ConfigMaps
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins-casc-reload
  namespace: devops-tools
  labels:
    app: jenkins
    version: v1
    tier: backend
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins-cr
subjects:
- kind: ServiceAccount
  name: jenkins-sa
  namespace: "devops-tools"
---
# We bind the role to the Jenkins service account. The role binding is created in the namespace
# where the agents are supposed to run.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-schedule-agents
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins-role-schedule-agents
subjects:
- kind: ServiceAccount
  name: jenkins-sa
  namespace: "devops-tools"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-watch-configmaps
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins-casc-reload
subjects:
- kind: ServiceAccount
  name: jenkins-sa
  namespace: "devops-tools"
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
  annotations:
    metallb.universe.tf/address-pool: default
spec:
  type: LoadBalancer
  loadBalancerIP: 172.16.1.5
  ports:
  - name: ui
    port: 8080
    targetPort: 8080
  externalTrafficPolicy: Local
  selector:
    app: jenkins
---
apiVersion: v1
kind: Service
metadata: 
  name: jenkins-agent
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
spec:
  ports:
  - name: agents
    port: 50000
    targetPort: 50000
  selector:
    app: jenkins 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
        version: v1
        tier: backend
      annotations:
        checksum/config: c0daf24e0ec4e4cb59c8a66305181a17249770b37283ca8948e189a58e29a4a5
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
        runAsNonRoot: true
      containers:
        - name: jenkins
          image: "heineza/jenkins-master:2.323-jdk11-1"
          imagePullPolicy: Always
          args: [ "--httpPort=8080"]
          env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: JAVA_OPTS
            value: -Djenkins.install.runSetupWizard=false -Dorg.apache.commons.jelly.tags.fmt.timeZone=America/Chicago
          - name: JENKINS_SLAVE_AGENT_PORT
            value: "50000"
          ports:
          - containerPort: 8080
            name: ui
          - containerPort: 50000
            name: agents
          resources:
            limits:
              cpu: 2000m
              memory: 4096Mi
            requests:
              cpu: 50m
              memory: 256Mi
          volumeMounts:
          - mountPath: /var/jenkins_home
            name: jenkins-home
            readOnly: false
          - name: jenkins-config
            mountPath: /var/jenkins_home/jenkins.yaml
          - name: admin-secret
            mountPath: /run/secrets/jenkins-admin-username
            subPath: jenkins-admin-user
            readOnly: true
          - name: admin-secret
            mountPath: /run/secrets/jenkins-admin-password
            subPath: jenkins-admin-password
            readOnly: true
      serviceAccountName: "jenkins-sa"
      volumes:
        - name: jenkins-cache
          emptyDir: {}
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: jenkins-pv-claim
        - name: jenkins-config
          configMap: 
            name: jenkins
        - name: admin-secret
          secret:
            secretName: jenkins-secret

This Jenkins manifest is a modified version of what the Jenkins Helm chart generates. I redacted my Secret, but the actual manifest contains base64-encoded strings. What could be preventing the Jenkins pod from being accessible from outside my cluster when using MetalLB? For context, the Docker image I created and use in the Deployment uses the Jenkins 2.323-jdk11 image as a base; I just installed some plugins for Configuration as Code, Kubernetes, and Git, roughly as sketched below.
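A rough sketch of that image's Dockerfile (from memory, so treat the plugin IDs as approximate):

FROM jenkins/jenkins:2.323-jdk11
# jenkins-plugin-cli ships with the official image and resolves plugin dependencies
RUN jenkins-plugin-cli --plugins configuration-as-code kubernetes git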

  • If you `traceroute ip-address-of-lb-service`, what is the output? Do you reach your Kubernetes cluster or not? That could help narrow down the possible problems. – AndD Dec 18 '21 at 07:29
  • So when I traceroute the kube-verify pod that is working from my laptop that is outside my cluster I get the ip for that pod. When I try to traceroute the jenkins pod, I just see my laptop listed in the output of traceroute and that's it. – heineza Dec 18 '21 at 18:45
  • Could you attach more details? Which version of Kubernetes did you use? Did you use bare metal installation or some cloud provider? Could you attach some yaml files? – kkopczak Dec 20 '21 at 20:48
  • @kkopczak I am running a version 1.22.3 bare metal Kubernetes cluster. I added my jenkins manifest, are there any other yaml files that you think would be helpful? – heineza Dec 20 '21 at 22:15

1 Answer

By default, MetalLB does not allow re-using/sharing the same LoadBalancerIP address across multiple Services.

According to MetalLB documentation:

MetalLB respects the spec.loadBalancerIP parameter, so if you want your service to be set up with a specific address, you can request it by setting that parameter.

If MetalLB does not own the requested address, or if the address is already in use by another service, assignment will fail and MetalLB will log a warning event visible in kubectl describe service <service name>.[1]
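
You can check for that warning event with, for example (namespace and Service name taken from your manifest):

kubectl -n devops-tools describe service jenkins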

If you need multiple Services on a single IP, you can enable selective IP sharing. To do so, add the metallb.universe.tf/allow-shared-ip annotation to the Services, as in the sketch after the list below.

The value of the annotation is a “sharing key.” Services can share an IP address under the following conditions:

  • They both have the same sharing key.
  • They request the use of different ports (e.g. tcp/80 for one and tcp/443 for the other).
  • They both use the Cluster external traffic policy, or they both point to the exact same set of pods (i.e. the pod selectors are identical). [2]
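
For example, a minimal sketch based on your two Services ("jenkins-shared" is an arbitrary sharing key; both Services keep the default Cluster traffic policy and identical selectors, so they satisfy the conditions above):

apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: devops-tools
  annotations:
    metallb.universe.tf/allow-shared-ip: "jenkins-shared"
spec:
  type: LoadBalancer
  loadBalancerIP: 172.16.1.5
  ports:
  - name: ui
    port: 8080
    targetPort: 8080
  selector:
    app: jenkins
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins-agent
  namespace: devops-tools
  annotations:
    metallb.universe.tf/allow-shared-ip: "jenkins-shared"
spec:
  type: LoadBalancer
  loadBalancerIP: 172.16.1.5
  ports:
  - name: agents
    port: 50000
    targetPort: 50000
  selector:
    app: jenkins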

UPDATE

I tested your setup successfully with one minor difference: I needed to remove externalTrafficPolicy: Local from the Jenkins Service spec.
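
That is, a sketch of the Service that worked for me (identical to yours, minus externalTrafficPolicy: Local, so the policy falls back to the default Cluster):

apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
  annotations:
    metallb.universe.tf/address-pool: default
spec:
  type: LoadBalancer
  loadBalancerIP: 172.16.1.5
  ports:
  - name: ui
    port: 8080
    targetPort: 8080
  # externalTrafficPolicy omitted -> defaults to Cluster
  selector:
    app: jenkins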

Try this solution; if it still doesn't work, then the problem lies in your cluster environment.

kkopczak
  • I have done that and had it set up that way before, and it still wasn't working. The first time I noticed it work for a moment was when I tried flushing the NAT iptables rules on the master node. However, I have compared my iptables rules with those of a friend's cluster and they look basically identical. – heineza Jan 02 '22 at 02:15
  • Also, when I have run kubectl describe service with it set up as you suggested, I see a statement that says "announcing IP", which suggests it should be working, but alas I see that annoying browser message "Unable to connect". And I know that when I assign it a distinct IP, that IP is free in the range I defined in the address pool of the MetalLB config ConfigMap. I usually use one of four distinct IPs in the pool, and before I assign them they are free and available. – heineza Jan 03 '22 at 00:46
  • Sorry for late response, I have updated the answer. – kkopczak Jan 24 '22 at 17:18
  • @kkopczak Confirm that `externalTrafficPolicy: Cluster` helps. Thank you! – Oleg Neumyvakin Apr 28 '22 at 12:58