
I have the default GCE ingress controller working, with Ingress resources set up to respond to hostnames.
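
For context, a minimal sketch of such a hostname-based Ingress (the hostname and backend Service name below are placeholders, not my actual setup):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress         # placeholder name
spec:
  rules:
  - host: app.example.com       # placeholder hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: app-svc  # placeholder backend Service
          servicePort: 80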

The advantage of a static IP (from my current point of view) is that you never have to wonder where to point your domain: it will always be the same IP, and on the other side you can put as many services as you want behind it.

I'm quite new to this GCE load balancer: can I rely on its IP as I would on a static IP (meaning it will never change)? Or is there a layer to add so that a static IP points to the load balancer?

I'm asking because you can set the IP of a Service resource, but I have no clue yet how to do the same with this load balancer controller/Ingress combo: can a static IP be assigned to an Ingress?

I've looked around and there seems to be some 'forwarding' mechanism (static IP to load balancer), but I'd really appreciate some experienced help on this one, at least so I end up understanding it all clearly.

Best

Ben
  • Same for the nginx ingress controller: the fact is that if you delete the ingress controller RC, the scheduler can deploy the new pod onto a new node, which then has a different IP configured. This also means that if the node goes down, the pod will be rescheduled and fail over, but the IP will change as well, so a fixed DNS record will fail to resolve. We have found no way so far to assign a fixed IP to the controller that implements the Ingress resource. – danius Oct 20 '16 at 19:18
  • Hello Dani, thanks a lot for your comment. So you're basically saying "no static IP for an Ingress", right? At least for the moment. – Ben Oct 20 '16 at 19:43
  • The most I found was this: https://beroux.com/english/articles/kubernetes/?part=3 – danius Oct 20 '16 at 21:05
  • That said, I tried it and it's working, although not very stable; I'm still investigating. So far, deploying the ingress and just "bypassing" kube-proxy (no Service, getting the IP directly from the node) works like a charm, but there is no way to fix the IP. The approach described in the link does fix the IP but has not been stable so far: I'm getting ERR_TIMED_OUT errors randomly and am still searching for the cause. – danius Oct 20 '16 at 21:10

1 Answer


Finally I have a working solution. You have to add an L4 Service with `loadBalancerIP: x.x.x.x`, where `x.x.x.x` is a previously reserved static IP, and give it a selector that the deployment/RC already has, like this:

UPDATE [Nov-2017]: the static IP should be regional and in the same region as the cluster.

Service:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-svc
spec:
  type: LoadBalancer
  loadBalancerIP: 104.155.55.37  # static IP pre-allocated.
  ports:
    - port: 80
      name: http
    - port: 443
      name: https
  selector:
    k8s-app: nginx-ingress-lb  # must match the labels on the ingress controller pods

Controller:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-rc
  labels:
    k8s-app: nginx-ingress-lb
spec:
  replicas: 1
  selector:
    k8s-app: nginx-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
    spec:
      containers:
      - image: eu.gcr.io/infantium-platform-20/nginx-ingress
        imagePullPolicy: Always
        name: nginx-ingress
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        args:
        - -nginx-configmaps=staging/nginx-staging-config
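
For reference, the `-nginx-configmaps=staging/nginx-staging-config` argument points the controller at a ConfigMap named `nginx-staging-config` in the `staging` namespace. A minimal sketch of what that ConfigMap could look like (the keys shown are only examples of the kind of nginx tuning such a controller reads, not my actual config):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-staging-config
  namespace: staging
data:
  # example tuning keys; the exact keys supported depend on the controller build
  proxy-connect-timeout: "10s"
  proxy-read-timeout: "60s"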

The hint for this solution came from this example: https://beroux.com/english/articles/kubernetes/?part=3

Hope this helps.

danius
  • OK, thanks a lot! I will investigate… But hmm, I'm still learning, sorry to ask this: with the example you show here, you don't have an Ingress, right? The cluster entry point is your nginx Service? – Ben Oct 20 '16 at 21:39
  • No, you still have the Ingress alongside: you first create the Ingress, then the ConfigMap (if needed), and then the Service and the controller, in that order. The Ingress and ConfigMap don't have anything to do with the IP part, since an Ingress cannot define a static IP. – danius Oct 20 '16 at 21:41
  • In other words, the controller looks for Ingress resources in its cluster and namespace; there is no need to bind them to the controller. The controller is a Go program that uses the Kubernetes APIs to discover everything. Yes, it looks like black magic at first, but you can go and check the source code to see how it works (I did, and I have a version that enables more customization: it adds the number of worker processes, connections per worker, etc.): https://github.com/infantium/kubernetes-ingress – danius Oct 20 '16 at 21:46
  • Aha! Big thanks, I'm starting to get the whole picture. I have to say the controller is the part I hadn't looked into (you can couple a Service with Deployments)… but is the controller now named Deployment in v1beta1? – Ben Oct 20 '16 at 21:57
  • Anyway, I'll try it and follow up here when I've made it work. Thanks a lot. – Ben Oct 20 '16 at 21:58
  • Deployment is meant to be the ReplicationController replacement at some point, yes! Right now it is almost equivalent. – danius Oct 20 '16 at 22:12
  • OK, so to sum up: with your example you still have an nginx in the process, but you don't have to configure it to respond to a domain; the Ingress contains the domain(s) configuration. – Ben Oct 20 '16 at 22:16
  • Yeah, and the configuration is generated by the Go program (the actual `ingress controller`) pulling config from the k8s API (Ingress and ConfigMap). You can even see the templates (and change them in case you need to) here: https://github.com/infantium/kubernetes-ingress/blob/master/nginx-controller/nginx/ingress.tmpl and https://github.com/infantium/kubernetes-ingress/blob/master/nginx-controller/nginx/nginx.conf.tmpl. It's pretty straightforward; I don't even code in Go and it took me 20 minutes to extend the behavior to accept more parameters and tune things. – danius Oct 20 '16 at 22:35
  • @Ben did you have any luck? If you did, mark the answer as the solution :) If not, ask me for more details. I validated this with a Google Cloud consultant through the Gold support my company has, and it's the way to go at the single-cluster level. – danius Oct 23 '16 at 23:32
  • Hello @danigosa, I sure will! I'll soon move some projects onto the setup I've built on GCE; I got the idea thanks to your support. I'll be back as soon as I have time, and will mark it, I guess positively. Thanks again. – Ben Oct 24 '16 at 00:14
  • Hmm, I see here that you proxy_pass: https://github.com/infantium/kubernetes-ingress/blob/master/nginx-controller/nginx/ingress.tmpl#L47 Can we say your nginx image creates nginx vhosts based on the 'monitoring' it does over Ingresses? Sorry to ask maybe obvious things, but without a complete understanding I might not set it all up correctly. – Ben Oct 24 '16 at 11:05
  • Yes, it is exactly that. For more information, take a look here for a complete example of ingress, controller, backend service and backend controller: https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example . In fact, it creates vhosts depending on whether you define host rules for the backends; for more detail see an ingress example: https://github.com/nginxinc/kubernetes-ingress/blob/master/examples/complete-example/cafe-ingress.yaml – danius Oct 24 '16 at 20:30
  • I only added 3 more parameters to the ConfigMap; the rest of the implementation is just a fork of nginxinc/kubernetes-ingress: https://github.com/nginxinc/kubernetes-ingress . Hope this helps! – danius Oct 24 '16 at 20:31
  • Hmm… dammit. I know it takes some time to get rid of the 502 error (before the LB comes up), but the tea/coffee example never made it (I ran it for 3 hours, just checked, still 502). It bothers me not to be able to run it as advertised :/ One weird thing in their example: they instruct you to get the IP via `kubectl get node {nodename} -o json | grep -A 2 ExternalIP`, but it seems wrong (the IP returned never responds); I get the IP via `kubectl get ing`. Is this an error in their readme? Talking about this one: https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example – Ben Oct 25 '16 at 06:36
  • Alright, got it in the end, and made it work with kube-lego as I did (more easily) with the GCE LB. Added value: the possibility to have a static IP. I guess this example is fine too: https://github.com/jetstack/kube-lego/tree/master/examples/nginx/nginx The thing is, whatever I tried, I could not get the health check running on my load balancer; it keeps telling me "This load balancer has no health check, so traffic will be sent to all instances regardless of their status." – Ben Oct 25 '16 at 10:52
  • Yeah, Deployment is the new "ReplicationController", and in fact most people keep calling Deployments "controllers" because they're used to it :) It's funny, because I was trying to solve this with the GCloud consultants and it wasn't clear even to them, as the docs and samples totally lack a section explaining Ingress controllers properly; the documentation on GitHub is so confusing that when you look at the solution you think "come on guys, this is ridiculous!" – danius Oct 25 '16 at 19:16
  • Well, in the end I have to say that, if you run on Google and don't care about static IPs, getting things up with the default GCE LB (what I first used) is a shortcut to consider. Speaking of UX, it saves you from setting up the nginx LB Service / health check / default backend. – Ben Oct 25 '16 at 23:36
  • Yeah, we extensively use the GCE LB or the L4 Service.type=LoadBalancer, but we had to support WebSockets, TLS and HTTP/2 and we needed full control over the LB server, which is only possible with nginx or HAProxy controllers... Of course, if you do not need advanced LB features like WebSockets, HTTP/2, advanced algorithms or configurations, you should go for the default GKE or GCE PaaS options :) – danius Oct 26 '16 at 19:31
  • Well, thanks to your help I guess I'll stick with it for future needs; I got this health-check thing working, so it now looks clean! – Ben Oct 26 '16 at 20:21
  • I cannot download `eu.gcr.io/infantium-platform-20/nginx-ingress` – silgon Apr 03 '18 at 14:32