10

As developers we wrote microservices on Azure Service Fabric, and for many customers we run them in Azure as a kind of PaaS offering. But some of our customers do not want to run in the cloud, as their databases are on-premises and are not going to be made available from the outside, not even through a DMZ. That's fine; we promised to support this, since Azure Service Fabric can be installed as a cluster on-premises.

We have an API gateway microservice running inside the cluster on every virtual machine. It uses the name resolver, so requests are routed and distributed accordingly. The API that this gateway exposes is the entry point for another piece of client software which our customers use; that software runs outside of the cluster and has to send its requests to the API.
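Internally, the gateway's name resolution looks roughly like this; it is only a sketch, and the application/service name and return handling are placeholders:

    using System;
    using System.Fabric;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.ServiceFabric.Services.Client;

    public static class GatewayRouting
    {
        // Ask the SF naming service for the current endpoint of a backend service.
        // "fabric:/MyApp/SomeBackendService" is a placeholder name.
        public static async Task<string> ResolveEndpointAsync(CancellationToken ct)
        {
            ServicePartitionResolver resolver = ServicePartitionResolver.GetDefault();
            ResolvedServicePartition partition = await resolver.ResolveAsync(
                new Uri("fabric:/MyApp/SomeBackendService"),
                ServicePartitionKey.Singleton,
                ct);

            // Address is a JSON blob such as {"Endpoints":{"":"http://10.0.0.2:8123"}}
            return partition.GetEndpoint().Address;
        }
    }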

I suggested using a load balancer like HAProxy or nginx on a separate machine (or machines), where the client software would send its requests and the reverse proxy would forward them to an available machine inside the cluster.

It seems that is not what our customer wants; another machine acting as a load balancer is not an option. Their suggestion: make the client software smarter so it can figure out which host to go to. In other words, we should write our own fail-over/load-balancing logic inside the client software.

What other options do we have?

PS: The client applications do not send many requests; maybe a few per minute.

rfcdejong
  • It's not clear what software sends requests where, how the software is separated into layers, and where SF, the client software and the databases sit in those layers. – cassandrad Mar 23 '17 at 15:35
  • There is a cluster of machines which host a web API service, so 10.0.0.1:8100, 10.0.0.2:8100 and 10.0.0.3:8100 are all the same web API entry point. The client is outside the cluster and has to go to one of them. – rfcdejong Mar 24 '17 at 14:26
  • What was the final solution that you implemented in this case? @rfcdejong – Aravind Jun 13 '17 at 07:18
  • Nothing really. I wasn't able to set up a single IP for the cluster in that environment, but the customer was going to try, as they have a VMware environment in which it should work. – rfcdejong Jun 13 '17 at 09:35

3 Answers

4

We had a very similar problem: many services and a Service Fabric cluster running on-premises. When we needed a load balancer, we installed IIS on the same machines where the Service Fabric cluster runs. Since IIS works well as a load balancer, we use it as a reverse proxy, but only for the API gateway. Kestrel hosting is used for the other services that communicate over HTTP. The API gateway microservice is the single entry point for all clients and always has a static URI inside SF; we used that URI to configure IIS.
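A minimal sketch of what that static entry point can look like on the gateway side (the port number and class names are illustrative, and inside SF this is normally wired up through the service's Kestrel communication listener), so IIS on the same machine can simply forward to http://localhost:8100:

    using Microsoft.AspNetCore.Hosting;

    public static class Program
    {
        public static void Main(string[] args)
        {
            new WebHostBuilder()
                .UseKestrel()
                // Fixed, well-known port; keep it in sync with the endpoint
                // declared in ServiceManifest.xml.
                .UseUrls("http://0.0.0.0:8100")
                .UseStartup<Startup>()   // Startup is the usual ASP.NET Core startup class
                .Build()
                .Run();
        }
    }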

If it is not possible for you to use IIS, then look at Using nginx as HTTP load balancer.

Roman Marusyk
  • This sprint I'm going to configure the NLB feature on each of the machines, and hopefully they get one single IP or DNS name. Is it IIS that does that as well, or does that require the NLB feature? I'm happy to give you the bounty if you also include a good reference to a blog about this. No one has a better answer until now. – rfcdejong Mar 28 '17 at 11:46
  • No reference for the IIS in the cluster as proxy? – rfcdejong Mar 29 '17 at 22:52
  • Sorry for the late answer. Yeah, there is very little information about that. There isn't one good reference; we researched a lot and glued many different parts together into one solution. We used only IIS without any external LB. Try creating an issue here: https://github.com/Azure/service-fabric. I think the guys from Microsoft will suggest something good. P.S. I'm just trying to give you advice; the bounty doesn't matter. – Roman Marusyk Mar 30 '17 at 22:31
  • I've been using IIS as a reverse proxy for Kestrel also, worked nicely, but that was only in dev – 4c74356b41 Mar 31 '17 at 08:00
  • The same situation, all services are hosted in Kestrel except API Gateway – Roman Marusyk Mar 31 '17 at 08:25
  • I'm not sure if and how IIS can be configured as a single IP for the outside. I would rather think it is the NLB feature in Windows that can do that. Everywhere I look I see nginx, even for Docker: https://stefanprodan.com/2016/nginx-reverse-proxy-aspnetcore-docker-swarm/ – rfcdejong Mar 31 '17 at 23:09
  • You did your best trying to give an answer, so I will give you the points before they are gone and wasted hehe.. even though it was not the answer I was hoping for – rfcdejong Mar 31 '17 at 23:09
  • I tried to configure an NLB (https://robertsmit.wordpress.com/2014/08/20/create-a-new-network-load-balancing-nlb-cluster-on-windows-server-2012-r2-winserv-nlb/), but our test environment is in Azure IaaS and somehow I'm unable to ping the IP from outside the cluster on the same subnet, even with unicast and multiple NICs in the VMs. Sadly I do not have a VMware environment to test it on. Anyway, the customer is happy enough to use round-robin DNS as a "poor man's load balancer". – rfcdejong Apr 04 '17 at 10:12
  • You can describe your solution as an answer here and accept it. I think it would be helpful to share your experience, because there isn't much information on the internet – Roman Marusyk Apr 04 '17 at 21:05
  • How do you implement the round-robin DNS load balancer? Can you explain or post it as an answer? @rfcdejong – decoder May 14 '17 at 09:42
  • Or how to configure IIS as a single IP for the outside? @MegaTron – decoder May 14 '17 at 09:46
  • I have a similar setup, with an external nginx load balancer talking to the Service Fabric reverse proxy. The issue is that when I increase the instances per node, the connection from nginx to the Fabric reverse proxy starts failing. With dynamic ports, Fabric is able to create multiple instances, but somehow nginx then fails to connect to the SF reverse proxy. – bomaboom Aug 29 '20 at 09:05
0

You don't need another machine just for HTTP forwarding. Just run it as a service on the cluster.

Did you consider using the built-in Reverse Proxy of Service Fabric? It runs on all nodes, and it will forward HTTP calls to services inside the cluster.
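For example, assuming the reverse proxy is enabled on its usual port 19081 and the application/service names are "MyApp"/"ApiGateway" (both placeholders), a client can call any node like this:

    using System.Net.Http;
    using System.Threading.Tasks;

    public static class ReverseProxyClient
    {
        private static readonly HttpClient Http = new HttpClient();

        // nodeAddress can be the IP or host name of any cluster node;
        // the reverse proxy on that node forwards the call to a healthy instance.
        public static Task<string> GetOrdersAsync(string nodeAddress) =>
            Http.GetStringAsync($"http://{nodeAddress}:19081/MyApp/ApiGateway/api/orders");
    }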

You can also run nginx as a guest executable or inside a Container on the cluster.

LoekD
  • A reverse proxy within the cluster is what we are doing already, but this question is about external access into the cluster. A client shouldn't explicitly route to one node. If the cluster had a single DNS name then yes, but the nodes have different IP addresses. – rfcdejong Mar 28 '17 at 11:42
  • If you run an RP instance on every node, it doesn't matter which node you connect to. Just take a random pick from all IP addresses. – LoekD Mar 28 '17 at 12:42
  • A random pick would require configuring the clients to know all the node IPs, and it could fail when one node is upgrading or otherwise unavailable – rfcdejong Mar 29 '17 at 08:50
  • Apply the Retry and Circuit Breaker design patterns at the client side (a rough sketch follows these comments). Otherwise you're stuck with an external LB/RP. – LoekD Mar 29 '17 at 09:36
  • Retry and Circuit Breaker against a single DNS name would be ok, but not against a list of IP addresses. I was hoping for an answer from anyone with a better idea, something that I don't already know :) – rfcdejong Mar 29 '17 at 22:54
  • this is not a load balancing solution, this is actually a non-answer – I Stand With Russia Jul 06 '21 at 14:46
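A rough sketch of the client-side fail-over discussed in these comments, assuming the client is given the list of node addresses (the addresses, path handling and timeout are illustrative; a real client would add circuit breaking, for example with Polly, and shuffle the list to spread load):

    using System;
    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;

    public static class FailoverClient
    {
        // Known cluster nodes; in practice these would come from configuration.
        private static readonly string[] Nodes =
        {
            "http://10.0.0.1:8100", "http://10.0.0.2:8100", "http://10.0.0.3:8100"
        };

        private static readonly HttpClient Http =
            new HttpClient { Timeout = TimeSpan.FromSeconds(5) };

        public static async Task<string> GetAsync(string path)
        {
            var errors = new List<Exception>();
            foreach (var node in Nodes)              // try each node until one answers
            {
                try
                {
                    HttpResponseMessage response = await Http.GetAsync(node + path);
                    if (response.IsSuccessStatusCode)
                        return await response.Content.ReadAsStringAsync();
                }
                catch (HttpRequestException ex) { errors.Add(ex); }
                catch (TaskCanceledException ex) { errors.Add(ex); }  // timeout
            }
            throw new AggregateException("All known nodes failed", errors);
        }
    }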
0

We also faced the same situation when we started working with a Service Fabric cluster. We configured Application Gateway as a proxy, but it did not provide functionality such as HTTP to HTTPS redirection.

For that reason we configured nginx, instead of Azure Application Gateway, as the proxy in front of the Service Fabric application.