
I have a Web API in ASP.NET Core which is deployed on an Azure App Service. The Web API is only used for posting an array of objects. Each object represents a building; it contains a lot of properties (100+) and 3 object arrays, which can each contain over 300 objects.

When I'm debugging in Visual Studio 2017 (15.4.1) the API works fine and accepts all requests. When deployed to Azure, it accepts most requests, but a few disappear into the void. By that I mean there is no response whatsoever. The requests that go wrong have a lot of objects in one of the arrays; by reducing the number of objects below a certain threshold, the request is accepted. As said, in Visual Studio all requests are accepted. The data sent is valid.

I have created a test controller with the same interface but no logic; it only responds with "success". On this interface the requests fail as well (same behavior). Newtonsoft.Json (10.0.3) is used by both controllers for deserialization.

What could cause this difference in behavior with these requests?

The code of the controller:

public class TestController : Controller
{       
    [HttpPost]
    [RequireHttps]
    public IActionResult Post([FromBody] Buildings Buildings)
    {
        enumActionResult status = enumActionResult.success;
        try
        {   
            if (Buildings == null)
                throw new Exception("No valid buildings posted!");

            if (Buildings.items == null)
                throw new Exception("No valid building items posted!");

            if (Buildings.items.Count == 0)
                throw new Exception("Zero building items posted!");


            // Echo the posted buildings back on success
            return Json(new response() { status = status.ToString(), Buildings = Buildings });
        }
        catch (Exception ex)
        {
            return Json(new ErrorResponseFormat() { message = ex.Message, status = enumActionResult.error, data = new ErrorResponse(ex) });
        }
    }
}

Update (2017-12-19): The problem is still there, but I have narrowed it down to SSL. I created a new controller which only echoes the request back:

    public class EchoController : Controller
    {
        [HttpPost]
        public IActionResult Post()
        {
            return new FileStreamResult(Request.Body, Request.ContentType);
        }
    }

If I make a request over HTTPS to the URL, it fails; if I fire the exact same request at the HTTP URL, it is successful. This is the same behavior I got from the debugger, because that uses HTTP only. So instead of a difference between the debugger and the Azure cloud environment, the problem appears to be HTTP versus HTTPS.
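For reference, the HTTP-versus-HTTPS comparison can also be reproduced outside Postman. Below is a minimal sketch; the certificate file, password, and URLs are placeholders, and it assumes an HttpClientHandler that exposes ClientCertificates (on older .NET Framework versions, WebRequestHandler plays that role):

    using System;
    using System.Linq;
    using System.Net.Http;
    using System.Security.Cryptography.X509Certificates;
    using System.Text;
    using System.Threading.Tasks;

    class EchoTest
    {
        static async Task Main()
        {
            // Attach the client certificate that the API demands
            // ("client.pfx" and its password are placeholders).
            var handler = new HttpClientHandler();
            handler.ClientCertificates.Add(new X509Certificate2("client.pfx", "pfx-password"));

            using (var client = new HttpClient(handler))
            {
                // Build a deliberately large JSON body, comparable in size
                // to the requests that disappear (roughly 100 KB).
                var payload = "{\"items\":[" + string.Join(",",
                    Enumerable.Repeat("{\"name\":\"building\"}", 5000)) + "]}";

                // Fire the identical request at both schemes; only the
                // HTTPS variant hangs without any response.
                foreach (var url in new[] { "http://my-api.example.com/api/echo",
                                            "https://my-api.example.com/api/echo" })
                {
                    var content = new StringContent(payload, Encoding.UTF8, "application/json");
                    var response = await client.PostAsync(url, content);
                    Console.WriteLine($"{url} -> {(int)response.StatusCode}");
                }
            }
        }
    }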

In the meantime I implemented the following in the config (as suggested by Bruce Chen):

    <system.webServer>
      <security>
        <requestFiltering>
          <!-- Measured in bytes -->
          <requestLimits maxAllowedContentLength="4000000000" maxQueryString="100000000" />
        </requestFiltering>
        <access sslFlags="Ssl, SslRequireCert" />
      </security>
    </system.webServer>

and upgraded to .NET 4.6.2. At this point still no solution, but the problem appears to be narrowed down.

Update (2017-12-20): I found the following interesting thing: when I send a "large" request to the (HTTPS) API it is lost, but when it is preceded by a small request, both requests are accepted. So it looks like the connection setup fails when it starts with a large request.

The web API uses a custom domain, HTTPS, and an IP whitelist. The client needs to authenticate with a client certificate. The full web API config (yes, I removed the requestLimits entry from before):

    <configuration>
      <connectionStrings>
        ...
      </connectionStrings>
      <system.webServer>
        <security>
          <access sslFlags="Ssl, SslRequireCert" />
        </security>
      </system.webServer>
      <runtime>
        <gcServer enabled="true" />
      </runtime>
    </configuration>

The client is Postman, using client certificates. Any thoughts?

Martijn
    We need to see some code, or logs, or something or we are just going to be guessing... – Milney Nov 01 '17 at 15:37
  • No one here is a psychic; you need to post some code. Are you sure you're using a POST request and not a GET that is potentially being truncated? What do you mean Newtonsoft is being used for deserialization: do you have a custom model binder, or are you just sending JSON? – johnny 5 Nov 01 '17 at 15:41
  • updated with code. No custom model binder is used – Martijn Nov 01 '17 at 15:59

2 Answers


Finally found the answer in an MSDN blog post.

https://blogs.msdn.microsoft.com/waws/2017/04/03/posting-a-large-file-can-fail-if-you-enable-client-certificates/

Our solution is the first option, since the clients are not .NET applications. It feels like a cheat, but it works.

(Below I have copied the text from the link, for preservation and because it is really hard to find.)

Overview

If you require client certificates and POST or PUT a large amount of data, your request may fail. This has been an issue that has existed with IIS for at least 10 years (this now applies to Azure App Services on the Windows platform as well since it uses IIS). There is not much information on this so this blog should help clarify this issue. The solution is incredibly simple!

Issue

The underlying protocol for HTTPS (TCP) will break up large data packets into more than one frame. Normally this is not an issue for applications and is transparent to both client and server. Applications that require client certificates can run into a problem because of this: the initial data is pushed in multiple frames, and the IIS server demands a client certificate before continuing. You can actually see this in a network trace. After the initial SSL handshake (the Server Hello), data starts flowing from the client to the server and the first packet of data is sent. The server's reply is the start of the request for the client certificate, but the next packet coming from the client contains more data. At this point the server throws an error, because it expected the next data on the wire to be the client certificate. This does not happen with smaller packets of data, because the entire POST or PUT request has finished and the next thing the server gets IS the client cert handshake, not additional data from the POST or PUT.

Resolution

To solve this issue, you simply need to utilize one of these two techniques:

  • Establish the connection first with a HEAD request
  • Set the Expect: 100-continue header for the request

Option 1: Establish the connection first with a HEAD request. I don't like this one as much as the next option, but it has worked successfully. The concept is that a HEAD request has no body at all and establishes the SSL connection without any issues. You then rely on the underlying implementation of your HTTP library to re-use this connection (most do); there is no further SSL handshake, and you avoid the problem.
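To make option 1 concrete, here is a rough sketch using HttpClient (the URL and payload are supplied by the caller, and it assumes the handler keeps the connection alive between the two requests):

    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    static class HeadFirstClient
    {
        public static async Task<HttpResponseMessage> PostLargeBodyAsync(
            HttpClient client, string url, string json)
        {
            // 1. HEAD request: no body, so the SSL handshake and the
            //    client-certificate exchange complete undisturbed.
            await client.SendAsync(new HttpRequestMessage(HttpMethod.Head, url));

            // 2. The large POST re-uses the already-negotiated connection,
            //    so no renegotiation happens mid-upload.
            var content = new StringContent(json, Encoding.UTF8, "application/json");
            return await client.PostAsync(url, content);
        }
    }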

Option 2: Set the Expect: 100-continue header for the request. This is the best option. According to the RFC, this is why the header actually exists! In simple terms, once the server is ready for the request (after the SSL negotiations in this case) you get a 100 status back and can continue to POST or PUT your data. All HTTP client libraries implement this (since it is part of the RFC), so this is simple and guaranteed to work. Below is a sample of setting this in .NET when using HttpWebRequest (or any of its derivatives). You do this once in your code, before you make any HTTP request (for example in global.asax or Startup.cs):

    ServicePointManager.Expect100Continue = true;

See: ServicePointManager.Expect100Continue Property. The newer HttpClient class adds this automatically (for obvious reasons).
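If you prefer to opt in per request rather than process-wide, HttpRequestMessage exposes an ExpectContinue flag on its headers; a minimal sketch:

    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    static class ExpectContinueClient
    {
        public static async Task<HttpResponseMessage> PostWithExpectContinueAsync(
            HttpClient client, string url, string json)
        {
            var request = new HttpRequestMessage(HttpMethod.Post, url)
            {
                Content = new StringContent(json, Encoding.UTF8, "application/json")
            };

            // Ask for "100 Continue" before transmitting the body, so the
            // client-certificate negotiation can finish first.
            request.Headers.ExpectContinue = true;

            return await client.SendAsync(request);
        }
    }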

Summary: Although this has been an issue for over a decade, it never found its way into any public-facing documentation (most likely because the Expect header is best practice for us old-school types, and the newer classes and libraries all include this header by default).

Martijn
  • So glad I found this. Had the exact same issue hosting an ASP.NET 5 app in Azure App Service. It was frustrating to see requests hang even though the requests were not that large. Thanks for doing all this research! – Jade L. Mar 10 '22 at 05:44

Per my understanding, your request may be hitting an IIS restriction. The maxAllowedContentLength attribute for requestLimits is described as follows:

Specifies the maximum length of content in a request, in bytes.

The default value is 30000000.

I assume that you could increase maxAllowedContentLength as follows:

<security>
  <requestFiltering>
    <!-- Measured in Bytes -->
    <requestLimits maxAllowedContentLength="1073741824" />  <!-- 1 GB, in bytes -->
  </requestFiltering>
</security>

I would recommend that you check your request body size and try increasing maxAllowedContentLength to narrow down this issue. Moreover, I found an issue about the request size limit in ASP.NET Core 2.0.0, and a similar issue you could refer to as well.
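Note that ASP.NET Core 2.0 also enforces its own body size limit, separate from IIS. If that turns out to be the bottleneck, it can be raised per action with the RequestSizeLimit attribute; a minimal sketch, where the limit value and controller are only examples:

    using Microsoft.AspNetCore.Mvc;

    public class BuildingsController : Controller
    {
        // Raises the ASP.NET Core body size limit for this action only;
        // 100,000,000 bytes is an arbitrary example value.
        [HttpPost]
        [RequestSizeLimit(100_000_000)]
        public IActionResult Post([FromBody] Buildings buildings)
        {
            return Ok();
        }
    }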

Bruce Chen
  • The JSON request is not that big. A JSON request of 134,977 chars (+/- 136.95 KB) fails. I don't get any response whatsoever, only a timeout from Postman after 2 min. In the development environment the response comes within 0.5 seconds. I did apply your suggestions, but there was no change in behavior. – Martijn Nov 06 '17 at 14:01