
I am facing a problem with my current k8s setup. In production, I spin up three replicas of each of our services and put them in a pod. When the pods speak to each other, we would like them to address each container in the target pod in round-robin fashion. Unfortunately, the connection between pods is never terminated, thanks to TLS keep-alive (and we don't want to change that part specifically), but we do want each container to receive its share of the traffic. This is roughly what we have now:

[image: How Services Talk]

If the API is trying to talk to, say, pod OSS, it will talk to the first container only. I want the API to be able to talk to all three in a round-robin fashion.

How do I do this? I understand that I will need an Ingress Controller, like nginx. But is there some real tutorial that breaks down how I can achieve this? I am unsure and somewhat new to k8s. Any help would be appreciated!

By the way, I am working locally on minikube.

Edit:

In production, we spin up three replicas of each service. When service A needs to speak to service B, a pod B1 from service B is selected and handles whatever requests it receives. However, that pod B1 then becomes the only pod from service B that handles any communication; in other words, pods B2 and B3 are never spoken to. I am trying to solve this with nginx because it seems we need a load balancer to help here, but I'm not sure how to set it up. Can anyone provide a detailed explanation of what needs to be done? Specifically, how can I set up nginx with my services so that all pods in a service are used (in some round-robin fashion), instead of what happens now, where only one pod is used? This is a problem because in production that one pod gets overloaded with requests and dies while the two other pods sit there doing nothing. I'm developing locally on minikube.

John Lexus
  • Why do you put all containers in one pod? To achieve your goal, create a Deployment with multiple replicas (each container would get its own pod) and create a Service that points to those pods. You will use the service name to access the pods, and Kubernetes will decide which one exactly. – Pavel Agarkov Aug 28 '18 at 20:18
  • Agreed with the comment above. You've architected this incorrectly. Create a Service for each service, have 3 pods backing the Service, and let Kubernetes schedule them; anything connecting to the Service will be round-robined between pods (see the sketch after these comments). Then, once you create an ingress controller for your API and point it at the services, you will get the behavior that you want. – Marcin Romaszewicz Aug 28 '18 at 21:01
  • @MarcinRomaszewicz Okay, I am actually wrong - "Create a Service for each service, have 3 pods backing the service, and let Kubernetes schedule them". I have done this. They are not containers in pods; they are each their own pod. Can you describe in more detail how I would create an ingress controller? I haven't found a really good tutorial that explains how to do one correctly with my services. – John Lexus Aug 28 '18 at 21:05
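A minimal sketch of the layout these comments describe (all names, the image, and the ports are illustrative placeholders, not taken from the question):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-b
spec:
  replicas: 3                        # B1, B2, B3 from the edit above
  selector:
    matchLabels:
      app: service-b
  template:
    metadata:
      labels:
        app: service-b
    spec:
      containers:
      - name: service-b
        image: example/service-b:1.0 # placeholder image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-b
spec:
  selector:
    app: service-b                   # matches the pods the Deployment creates
  ports:
  - port: 80
    targetPort: 8080

Service A then reaches it at http://service-b, and kube-proxy spreads new connections across the three pods. One caveat that matches the symptom in the question: this balancing is per connection, so a kept-alive connection keeps hitting the same pod until it is closed.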

2 Answers


I'm assuming that you have a microservice architecture underneath your pods, right? Have you considered using Istio with Kubernetes? It's open source and developed by Google, IBM and Lyft; the intention is to give developers a vendor-neutral way (which seems to be what you are looking for) to connect, secure, manage, and monitor networks of different microservices on cloud platforms (AWS, Azure, Google, etc.).

At a high level, Istio helps reduce the complexity of these deployments and eases the strain on your development teams. It is a completely open-source service mesh that layers transparently onto existing distributed applications. It is also a platform, with APIs that let it integrate into any logging, telemetry, or policy system. Istio's diverse feature set lets you run a distributed microservice architecture efficiently, and provides a uniform way to secure, connect, and monitor microservices.

This is the link to Istio's documentation explaining in detail how to set up a multi-cluster environment, which is what you are looking for.

There's a note in the documentation that I would like to highlight -- it may be related to your issue:

Since Kubernetes pods don’t have stable IPs, restart of any Istio service pod in the control plane cluster will cause its endpoint to be changed. Therefore, any connection made from remote clusters to that endpoint will be broken. This is documented in Istio issue #4822.

There are a number of ways to either avoid or resolve this scenario. This section provides a high level overview of these options.

  • Update the DNS entries
  • Use a load balancer service type
  • Expose the Istio services via a gateway

I'm quoting the load balancer solution, since it seems to be what you want:

In Kubernetes, you can declare a service with a service type to be LoadBalancer. A simple solution to the pod restart issue is to use load balancers for the Istio services. You can then use the load balancer IPs as the Istio services’s endpoint IPs to configure the remote clusters.
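For reference, declaring that type is a one-line change on a Service; here is a hedged sketch with placeholder names and ports:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer               # asks the platform for an external load balancer
  selector:
    app: my-app                    # placeholder selector
  ports:
  - port: 80
    targetPort: 8080

On minikube there is no cloud load balancer, so the external IP will sit in pending; running minikube service my-service prints a locally reachable URL instead.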

I hope it helps, and if you have any questions, shoot!

Fabio Manzano
  • I like the idea of istio. Are you familiar with it? I tried setting it up but could not for the life of me set up service entries and gateways as needed. I still need to access traffic outside of my cluster and istio essentially wouldn't let me, no matter what authentication I turned off. – John Lexus Sep 06 '18 at 05:42
  • Sure. I see a lot of companies combining Istio with an API Gateway for external access (check these references [1](https://abhishek-tiwari.com/a-sidecar-for-your-service-mesh/) and [2](https://medium.com/microservices-in-practice/service-mesh-vs-api-gateway-a6d814b9bf56)). Since you mentioned nginx in your question, I would recommend the use of [Kong](https://konghq.com/kong-community-edition/), which is an API Gateway implementation on top of nginx. – Fabio Manzano Sep 06 '18 at 12:47
  • Continuing... As depicted in [this picture](https://abhishek-tiwari.com/assets/images/Service-Mesh-API-Gateway.png), you should call your LoadBalancer from the API Gateway. Also, there's a popular Stack Overflow thread talking about k8s Ingress and Load Balancer, integrating with nginx: https://stackoverflow.com/questions/45079988/kubernetes-ingress-vs-load-balancer – Fabio Manzano Sep 06 '18 at 12:49

A very simple example of how to balance load across your backend pods using a Kubernetes Service is given here.

Your replicas should be managed by Kubernetes itself, as described in the link: create your pods along the lines of the example below, then follow the steps there to create a Service pointing to those pods.

kubectl run hello-world --replicas=2 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0  --port=8080

By doing this, Kubernetes will ensure the load is distributed evenly among all your running pods.
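The matching Service step from that tutorial is along these lines (my-service is the tutorial's name for it; the LoadBalancer type can be dropped if you only need in-cluster, pod-to-pod traffic):

kubectl expose deployment hello-world --type=LoadBalancer --name=my-service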

In your case, you might want to look at the way you created your pods and services. One way to be sure your Services are set up correctly is to run the command below; the result should show multiple ENDPOINTS, i.e. a set of IP:port pairs pointing to your individual replica pods, as in the example output:

kubectl get endpoints --all-namespaces

NAMESPACE     NAME                      ENDPOINTS                                                  AGE
kube-system   kube-dns                  10.244.0.96:53,10.244.0.97:53,10.244.0.96:53 + 1 more...   1d

Well, if you are really interested in setting up an nginx ingress, this would be a good start. But a simple LoadBalancer-type Kubernetes Service should satisfy your current requirement.
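If you do end up wanting the ingress, the resource itself is short. A sketch against the nginx ingress controller, using the apiVersion that was current when this thread was written, a placeholder hostname, and an assumed backend Service named my-service:

apiVersion: extensions/v1beta1     # networking.k8s.io/v1 on current clusters
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: api.example.com          # placeholder hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service  # assumed backend Service
          servicePort: 80

Note that an ingress only manages traffic entering the cluster; the pod-to-pod round robin inside the cluster comes from the Service alone.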

fatcook