
We have a Service Fabric application that comprises a single service. The service exposes HTTP endpoints via Web API on port 8910.

We deploy this application as multiple application instances and use the following code to prevent port clashes between the service instances:

    protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
    {
        var settingsResolver = SettingsResolver.GetResolver();

        return new ServiceInstanceListener[]
        {
            new ServiceInstanceListener(serviceContext => 
                new OwinCommunicationListener(
                    startup => new OwinBuilder(settingsResolver, baseLogger)
                                .Configure(startup), 
                    serviceContext, 
                    ServiceEventSource.Current, 
                    "ServiceEndpoint",
                    serviceContext.ServiceName.Segments[1]))
        };
    }

serviceContext.ServiceName.Segments[1] resolves to the name of the application instance, so each application instance listens on its own URL path on the shared port.
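
Below is a minimal sketch of how that listening address is typically composed inside an OWIN communication listener. The variable names and the exact composition are illustrative, not our actual listener code, and it assumes Microsoft.Owin.Hosting and System.Fabric are referenced.

    // Sketch only -- "startupAction" stands for the Action<IAppBuilder> passed into the listener.
    var endpoint = serviceContext.CodePackageActivationContext
                                 .GetEndpoint("ServiceEndpoint");   // Port = 8910, Protocol = Http

    // ServiceName.Segments[1] includes a trailing slash, e.g. "MyAppInstance1/"
    string pathSuffix = serviceContext.ServiceName.Segments[1].TrimEnd('/');

    // Each application instance registers the same port but a different path,
    // relying on http.sys URL-based port sharing, e.g. http://+:8910/MyAppInstance1/
    string listeningAddress = $"http://+:{endpoint.Port}/{pathSuffix}/";

    // Self-host the OWIN pipeline on that address.
    IDisposable webApp = WebApp.Start(listeningAddress, startupAction);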

The service manifest has the port configuration:

    <Endpoint Protocol="http" Name="ServiceEndpoint" Type="Input" Port="8910" />

However, when we run two application instances within the same Azure cluster we intermittently get 503 errors when connecting to our endpoints. They resolve themselves eventually, but I am wondering whether there are any additional steps I need to set up in order to handle multi-instance applications with port sharing?
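
For reference, one way to see which address each application instance has actually registered with the naming service is to resolve the partitions with FabricClient. A rough sketch follows; the fabric:/ URIs are placeholders for our real application instance names, and it assumes an async context with System.Fabric referenced.

    // Sketch only -- resolves each application instance's service to see the published address.
    using (var fabricClient = new FabricClient())
    {
        var serviceNames = new[]
        {
            new Uri("fabric:/MyAppInstance1/MyService"),
            new Uri("fabric:/MyAppInstance2/MyService")
        };

        foreach (Uri serviceName in serviceNames)
        {
            ResolvedServicePartition partition =
                await fabricClient.ServiceManager.ResolveServicePartitionAsync(serviceName);

            foreach (ResolvedServiceEndpoint resolvedEndpoint in partition.Endpoints)
            {
                // Address holds what the runtime published for the listener,
                // e.g. {"Endpoints":{"":"http://node1:8910/MyAppInstance1"}}
                Console.WriteLine($"{serviceName}: {resolvedEndpoint.Address}");
            }
        }
    }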

Richard Baldwin
    Are you creating multiple instances of the service within a single application instance, or multiple application instances? – Vaclav Turecek Mar 15 '17 at 19:52
  • I'm experiencing the same issue with random dropouts from web services resulting in 503 errors. Did you ever work out what was causing the errors? It's worth mentioning that I have 5 nodes (smallest VM) with multiple applications and services spread out among them, so it may be that the servers are congested. – honk Jul 06 '17 at 12:37
  • I'm confused about your "port clashes". I've been running ASP.NET Core Web API on a 5-node cluster for almost a year in a prod environment with no issues. That 503 sounds more like what 503 really means: you don't have enough resources to handle the requests. I noticed a big difference in performance when going from 2-core to 4-core machines when I originally got started with Service Fabric. Regardless, you should test that theory: temporarily beef up those machines and see if you still have issues. Since there is a load balancer in play, I'm confused about your port handling issue. – The Muffin Man Aug 19 '17 at 06:55
  • Did you ever find the solution to this, or what was causing it? We have an SF cluster where we have applied placement constraints to the services, so that the web-exposed services are placed on a couple of nodes and the services that do backend batch processing on the others. We keep getting random 503s. It works well when I have web-exposed services on all nodes. Not sure if the Load Balancer is routing requests to a node where we don't have any web-exposed services. – Dharmesh Tailor Jul 13 '21 at 23:39

0 Answers