
This config works in other clusters, but not in the most recent one I have deployed. There seems to be some kind of problem with my RBAC configuration.

kubectl get pods -n ingress-controller

NAME                                     READY   STATUS             RESTARTS   AGE
haproxy-ingress-b4d969b8b-dw65k          0/1     CrashLoopBackOff   15         52m
ingress-default-backend-f5dfbf97-6t72p   1/1     Running            0          52m

kubectl logs -n ingress-controller -l run=haproxy-ingress

I0120 11:55:17.347244       6 launch.go:151] 
Name:       HAProxy
Release:    v0.8
Build:      git-1351a73
Repository: https://github.com/jcmoraisjr/haproxy-ingress
I0120 11:55:17.347337       6 launch.go:154] Watching for ingress class: haproxy
I0120 11:55:17.347664       6 launch.go:364] Creating API client for https://10.3.0.1:443
I0120 11:55:17.391439       6 launch.go:376] Running in Kubernetes Cluster version v1.16 (v1.16.4) - git (clean) commit 224be7bdce5a9dd0c2fd0d46b83865648e2fe0ba - platform linux/amd64
F0120 11:55:17.401773       6 launch.go:177] no service with name ingress-controller/ingress-default-backend found: services "ingress-default-backend" is forbidden: User "system:serviceaccount:ingress-controller:ingress-controller" cannot get resource "services" in API group "" in the namespace "ingress-controller": RBAC: clusterrole.rbac.authorization.k8s.io "ingress-controller" not found

kubectl get svc -n ingress-controller

NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
ingress-default-backend   ClusterIP   10.3.118.160   <none>        8080/TCP   55m

kubectl describe clusterrole ingress-controller

Name:         ingress-controller
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"rbac.authorization.k8s.io/v1beta1","kind":"ClusterRole","metadata":{"annotations":{},"name":"ingress-controller"},"rules":[...
PolicyRule:
  Resources                    Non-Resource URLs  Resource Names  Verbs
  ---------                    -----------------  --------------  -----
  events                       []                 []              [create patch]
  services                     []                 []              [get list watch]
  ingresses.extensions         []                 []              [get list watch]
  nodes                        []                 []              [list watch get]
  configmaps                   []                 []              [list watch]
  endpoints                    []                 []              [list watch]
  pods                         []                 []              [list watch]
  secrets                      []                 []              [list watch]
  ingresses.extensions/status  []                 []              [update]

kubectl describe clusterrolebinding -n ingress-controller ingress-controller

Name:         ingress-controller
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"rbac.authorization.k8s.io/v1beta1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"ingress-controller"},"r...
Role:
  Kind:  ClusterRole
  Name:  ingress-controller
Subjects:
  Kind            Name                Namespace
  ----            ----                ---------
  ServiceAccount  ingress-controller  ingress-controller
  User            ingress-controller  

kubectl auth can-i get services --as=ingress-controller

no - RBAC: clusterrole.rbac.authorization.k8s.io "ingress-controller" not found
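Note that `--as=ingress-controller` impersonates a plain User named ingress-controller, not the pod's service account. A sketch of the check using the identity the controller actually runs as (the `system:serviceaccount:<namespace>:<name>` form, with the namespace and account name taken from the binding above):

```shell
# Impersonate the controller's service account rather than a User
kubectl auth can-i get services -n ingress-controller \
  --as=system:serviceaccount:ingress-controller:ingress-controller
```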

Any help will be appreciated.

UPDATE:

Add deployment and rbac for ingress-controller:

https://github.com/jcmoraisjr/haproxy-ingress/blob/master/examples/deployment/haproxy-ingress.yaml

https://github.com/jcmoraisjr/haproxy-ingress/blob/master/examples/rbac/ingress-controller-rbac.yml

TlmaK0
  • Is your cluster on-prem or local? How did you deploy this ingress controller, with any tutorial or Helm? I've tried to reproduce it but I didn't have this issue. – PjoterS Jan 22 '20 at 10:23
  • It is an OVH Kubernetes cluster. Two other clusters I created before work without issues. I can't reproduce it in my development environment. I created it following the examples here https://github.com/jcmoraisjr/haproxy-ingress/tree/master/examples – TlmaK0 Jan 22 '20 at 11:31
  • Can you share the ingress controller deployment YAML and describe the ingress controller pod? – Arghya Sadhu Jan 23 '20 at 04:04
  • Check your api-server pod logs in the kube-system namespace for errors with the creation, as this seems to be a cluster issue if the YAMLs work on other clusters. – char Jan 23 '20 at 08:25
  • Can you post the output of `kubectl auth can-i get services --as=system:serviceaccount:ingress-controller:ingress-controller`? – switchboard.op Jan 23 '20 at 20:58
  • I never used OVH, but: 1. You are using K8s 1.16 and the apiVersion you are using is `rbac.authorization.k8s.io/v1beta1`; since 1.16 it should be `apiVersion: rbac.authorization.k8s.io/v1`. 2. In your ClusterRoleBinding, under subjects you put the `-` next to apiGroup; it should be next to `kind`. 3. Based on the docs https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-binding-examples shouldn't you use something like name: system:serviceaccount:ingress-controller, since the error comes from a system service account? – PjoterS Jan 24 '20 at 08:56
  • Thanks for your help. I created an issue with OVH for another problem related to nodes, and magically everything started to work :). Unfortunately they didn't give me the solution. – TlmaK0 Jan 24 '20 at 16:37
  • @switchboard.op I had the same problem and your command returned `yes`. However I can't specify the full name of the service account in the yaml, as I get an "Invalid value" error – peetasan Jul 07 '20 at 08:48

1 Answer


The ClusterRoleBinding is bound to the service account ingress-controller, and it works with the daemonset example because that manifest sets serviceAccountName: ingress-controller.

The deployment does not define serviceAccountName, so its pod runs under the default service account (not ingress-controller).

You can therefore fix it by binding the ClusterRole to default like this:


apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-controller
subjects:
  - kind: ServiceAccount
    name: default
    namespace: ingress-controller
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: ingress-controller
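Alternatively, keep the shipped RBAC intact and set the service account in the deployment's pod template instead, so the pod runs as the ingress-controller account the binding already targets (a sketch; the deployment name is assumed from the pod name in the question, and only the relevant fields are shown):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: haproxy-ingress
  namespace: ingress-controller
spec:
  template:
    spec:
      # Run the controller pod as the service account the
      # ClusterRoleBinding already references
      serviceAccountName: ingress-controller
      # ...the rest of the pod spec stays as in the example deployment
```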
Cyril Jouve