
What thresholds should be set in Service Fabric Placement / Load balancing config for Cluster with large number of guest executable applications?

I am having trouble with Service Fabric trying to place too many services onto a single node too fast.

To give an example of cluster size: there are 2-4 worker node types, 3-6 worker nodes per node type, each node type may run 200 guest executable applications, and each application has at least 2 replicas. The nodes are more than capable of running the services once they are up; it is only during startup that CPU is too high.

The problem seems to be the thresholds or defaults for the placement and load balancing rules set in the cluster config. As examples of what I have tried: I have turned on InBuildThrottlingEnabled and set InBuildThrottlingGlobalMaxValue to 100, and I have set the Global Movement Throttle settings to various percentages of the total application count.

At this point there are two distinct scenarios I am trying to solve for. In both cases, the nodes go to 100% CPU for long enough that Service Fabric declares the node down.

1st: Starting an entire cluster from all nodes being off without overwhelming nodes.

2nd: A single node being overwhelmed by too many services starting after a host comes back online.

Here are my current parameters on the cluster:

       "Name": "PlacementAndLoadBalancing",
       "Parameters": [
         {
           "Name": "UseMoveCostReports",
           "Value": "true"
         },
         {
           "Name": "PLBRefreshGap",
           "Value": "1"
         },
         {
           "Name": "MinPlacementInterval",
           "Value": "30.0"
         },
         {
           "Name": "MinLoadBalancingInterval",
           "Value": "30.0"
         },
         {
           "Name": "MinConstraintCheckInterval",
           "Value": "30.0"
         },
         {
           "Name": "GlobalMovementThrottleThresholdForPlacement",
           "Value": "25"
         },
         {
           "Name": "GlobalMovementThrottleThresholdForBalancing",
           "Value": "25"
         },
         {
           "Name": "GlobalMovementThrottleThreshold",
           "Value": "25"
         },
         {
           "Name": "GlobalMovementThrottleCountingInterval",
           "Value": "450"
         },
         {
           "Name": "InBuildThrottlingEnabled",
           "Value": "false"
         },
         {
           "Name": "InBuildThrottlingGlobalMaxValue",
           "Value": "100"
         }
       ]
     },

Based on the discussion in the answer below, I wanted to leave a graph image: if a node goes down, the act of shuffling services onto the remaining nodes causes a second node to go down, as noted there. The green node goes down, then the purple node goes down due to too many services being shuffled onto it.

(Graph demonstrating the above: green goes down, then purple behind it.)

1 Answer


From SF's perspective, 1 and 2 are the same problem. Also, as a note, SF doesn't evict a node just because CPU consumption is high. So the statement "The nodes go to 100% for an amount of time such that Service Fabric declares the node as down" needs some more explanation. The machines might be failing for other reasons, or could be so loaded that the kernel-level failure detectors can't ping other machines, but that isn't very common.

For config changes: I would remove all of these and go with the defaults:

 {
   "Name": "PLBRefreshGap",
   "Value": "1"
 },
 {
   "Name": "MinPlacementInterval",
   "Value": "30.0"
 },
 {
   "Name": "MinLoadBalancingInterval",
   "Value": "30.0"
 },
 {
   "Name": "MinConstraintCheckInterval",
   "Value": "30.0"
 },

For the in-build throttle to work, this needs to flip to true:

     {
       "Name": "InBuildThrottlingEnabled",
       "Value": "false"
     },
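
Putting those two recommendations together, the PlacementAndLoadBalancing section would end up looking roughly like the sketch below. It keeps the throttle values from the question as-is (they are illustrative, not tuned recommendations), drops the interval overrides, and flips the in-build flag to true:

     {
       "Name": "PlacementAndLoadBalancing",
       "Parameters": [
         { "Name": "UseMoveCostReports", "Value": "true" },
         { "Name": "GlobalMovementThrottleThresholdForPlacement", "Value": "25" },
         { "Name": "GlobalMovementThrottleThresholdForBalancing", "Value": "25" },
         { "Name": "GlobalMovementThrottleThreshold", "Value": "25" },
         { "Name": "GlobalMovementThrottleCountingInterval", "Value": "450" },
         { "Name": "InBuildThrottlingEnabled", "Value": "true" },
         { "Name": "InBuildThrottlingGlobalMaxValue", "Value": "100" }
       ]
     }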

Also, since these are likely constraint violations and placement (not proactive rebalancing), we need to explicitly instruct SF to throttle those operations as well. There is config for this in SF; although it is not documented or publicly supported at this time, you can see it in the settings. By default only balancing is throttled, but you should be able to turn on throttling for all phases and set appropriate limits via something like the below.

These first two settings are also within PlacementAndLoadBalancing, like the ones above.

 {
   "Name": "ThrottlePlacementPhase",
   "Value": "true"
 },
 {
   "Name": "ThrottleConstraintCheckPhase",
   "Value": "true"
 },

These next settings, which set the limits, are in their own sections; each one is a map from node type name to the limit you want to enforce for that node type.

 {
   "name": "MaximumInBuildReplicasPerNodeConstraintCheckThrottle",
   "parameters": [
     {
       "name": "YourNodeTypeNameHere",
       "value": "100"
     },
     {
       "name": "YourOtherNodeTypeNameHere",
       "value": "100"
     }
   ]
 },
 {
   "name": "MaximumInBuildReplicasPerNodePlacementThrottle",
   "parameters": [
     {
       "name": "YourNodeTypeNameHere",
       "value": "100"
     },
     {
       "name": "YourOtherNodeTypeNameHere",
       "value": "100"
     }
   ]
 },
 {
   "name": "MaximumInBuildReplicasPerNodeBalancingThrottle",
   "parameters": [
     {
       "name": "YourNodeTypeNameHere",
       "value": "100"
     },
     {
       "name": "YourOtherNodeTypeNameHere",
       "value": "100"
     }
   ]
 },
 {
   "name": "MaximumInBuildReplicasPerNode",
   "parameters": [
     {
       "name": "YourNodeTypeNameHere",
       "value": "100"
     },
     {
       "name": "YourOtherNodeTypeNameHere",
       "value": "100"
     }
   ]
 }
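
For orientation on where these sections live in the overall config: each one is its own entry in the cluster's fabricSettings list (as found in an Azure ARM template or a standalone cluster's ClusterConfig.json), sitting next to PlacementAndLoadBalancing rather than inside it. A minimal sketch, with "NodeType0" as a placeholder node type name and the other PlacementAndLoadBalancing parameters omitted:

 "fabricSettings": [
   {
     "name": "PlacementAndLoadBalancing",
     "parameters": [
       { "name": "ThrottlePlacementPhase", "value": "true" },
       { "name": "ThrottleConstraintCheckPhase", "value": "true" }
     ]
   },
   {
     "name": "MaximumInBuildReplicasPerNodePlacementThrottle",
     "parameters": [
       { "name": "NodeType0", "value": "100" }
     ]
   },
   {
     "name": "MaximumInBuildReplicasPerNodeConstraintCheckThrottle",
     "parameters": [
       { "name": "NodeType0", "value": "100" }
     ]
   }
 ]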

I would make these changes and then try again. Additional information like what is actually causing the nodes to be down (confirmed via events and SF health info) would help identify the source of the problem. It would probably also be good to verify that starting 100 instances of the apps on the node actually works and whether that's an appropriate threshold.

masnider
  • To clarify, I did test a version of the config with InBuildThrottlingEnabled set to true, along with your recommendations on the cluster. It did not get the cluster to spin up fast enough: it will spin up a set of services, then pause for an hour, then start trying to spin up another set. With the posted config I was attempting a different approach, getting the cluster to start a smaller number of services at a time but with a quicker interval between service start rounds. There is an entry from platform_events when the node is declared down: eventName: "NodeClosed", category: "StateTransition", error: "S_OK" – George Whiting Jun 29 '20 at 18:13
  • I can run 280 of these services on the same size server as normal Windows services; the services are started in batches of 5-10 over a period of 5 minutes. – George Whiting Jun 29 '20 at 21:04
  • NodeClosed is a graceful transition so something else is causing that. If the node were crashing it wouldn't be an intentional close action (when SF boots a node out of the cluster you would see Lease level failures). If you can share some of the platform events that might help. Tracking what is causing the nodes to go down should probably be its own question though? – masnider Jun 30 '20 at 19:28
  • Part of the issue here is also that SF considers these services being down as "constraint violations". Many of the throttles do not apply and only take effect during "proactive rebalancing". We will probably need to add the MaximumInBuildReplicasPerNodeConstraintCheckThrottle (an internal setting today, but you can see it in the code). I will update the answer and you can try it out. – masnider Jun 30 '20 at 19:36
  • @masnider Sorry for the late reply; testing took a while. This has worked well. I ended up settling on 50 for constraint and placement throttling. The linked settings page has lots of undocumented settings to explore. – George Whiting Jul 13 '20 at 16:10