
I recently came across Azure Service Fabric, and it seems like a good way to develop scalable applications as a bunch of microservices. Everywhere it is said that we need stateless front-end web services and stateful, partitioned internal services. The internal services scale by partitioning their data.

But what happens to the front-end services under load? The chances of overload seem low, as they do nothing but relay to the internal stateful services. Still, should we put a load balancer in front of the front-end services? If so, can we host that too via Service Fabric's stateless model using OWIN or another web host?

This question was already asked on SO, but as a comment. It didn't get a reply because the original question was different: Azure Service Fabric usage

Joy George Kunjikkuru

1 Answer


Yes, you'll definitely want to distribute load across your stateless services as well. The key difference is that since they are stateless, they can handle requests in a round-robin fashion.

Whereas stateful services have partitions, which map to individual chunks of the service's state, stateless services simply have instances, which are identical clones of each other, just on different nodes. You can set the number of instances in the default service definition in the application manifest. For instance, this declaration will ensure that there are always 5 instances of your stateless service running in the cluster:

<Service Name="Stateless1">
   <StatelessService ServiceTypeName="Stateless1Type" InstanceCount="5">
     <SingletonPartition />
   </StatelessService>
</Service>

You can also set the InstanceCount to -1, in which case Service Fabric will create an instance of your stateless service on each node.
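For example, the same declaration from above with InstanceCount changed to -1 would run one instance on every node in the cluster:

```xml
<Service Name="Stateless1">
   <StatelessService ServiceTypeName="Stateless1Type" InstanceCount="-1">
     <SingletonPartition />
   </StatelessService>
</Service>
```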

The Azure load-balancer will round-robin your incoming traffic across each of your instances. Unfortunately, there isn't a good way to simulate this in a one-box environment right now.

Sean McKenna
  • Why can't we simulate it in a one-box environment? When I tried to add more instances to a service with a web endpoint (TCP/port 80), it complained about port 80 being busy. Is that the problem? – rmac Jul 28 '15 at 07:21
  • 2
    Two reasons: 1) The Azure load balancer is external to Service Fabric and there is no local equivalent. 2) You can't have multiple processes listening on the exact same endpoint/port on a single machine. – Sean McKenna Jul 28 '15 at 18:17
  • @SeanMcKenna-MSFT the second statement by itself is not true, as Windows has a TCP port-sharing feature that allows many processes to listen on the same port but with different URIs. It seems that if the URI has some unique ID, you could solve this. Or, if possible, have them listen on ephemeral ports and auto-register with something like nginx locally, which listens on port 80 and does the round-robin to the registered local instances. Just brainstorming after seeing these comments... – CJ Harmath Oct 30 '15 at 20:40
  • @Csaba: Yes, when I said "exact same endpoint/port", I meant full URI path. So you can have one process handle http://localhost:8081/foo and another handle http://localhost:8081/bar but not two simultaneously handling one or the other. You could let Service Fabric assign a dynamic port to each instance and then build your own mini-load balancer as you suggest. However, that itself is an inaccurate representation of the real environment and is somewhat harder to configure differently for local vs. cloud. – Sean McKenna Oct 31 '15 at 17:51
  • 1
    It's also important to understand that Azure Load Balancer targets every machine in your cluster. When you use e.g InstanceCount=2 instead of InstanceCount=-1, Azure Load Balancer does not know, on which machines these instances are placed, so by default it would also route traffic to machines that don't have the service. You would have to rely on custom probes per service (which results in errors whenever SF moves a service) or you could use separate nodeTypes and bind the services and load balancer to this pool. However, this also has drawbacks and makes the cluster setup more complex IMO. – Christian Weiss Aug 20 '16 at 09:15
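A minimal sketch of the local round-robin idea floated in the comments above. The backend addresses and ports here are made up for illustration; in a real local setup they would be the dynamic ports Service Fabric assigns to each stateless instance:

```python
from itertools import cycle

# Hypothetical local instance endpoints (ports are invented for this sketch;
# Service Fabric would assign real dynamic ports to each instance).
backends = [
    "http://localhost:34001",
    "http://localhost:34002",
    "http://localhost:34003",
]

_rr = cycle(backends)

def next_backend():
    """Pick the next backend in round-robin order."""
    return next(_rr)

# Six consecutive requests cycle through the three instances twice.
chosen = [next_backend() for _ in range(6)]
print(chosen)
```

As Sean points out, this still diverges from the real Azure load balancer, so it is only a rough local approximation, not a faithful simulation of the cloud environment.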