Our web application (hosted in a Web App in Azure) experiences spikes in HTTP Queue Length. Each time there is a spike, the web application crashes and we either have to wait for Azure to restart the web app or restart it ourselves. This happens very often.
The web application does use SignalR, and a Web Job is running that calls a method on the Hub which then broadcasts data to connected clients. There is only ever a handful of users at this stage, so we have not implemented a SignalR backplane.
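For context, the broadcast path looks roughly like the sketch below. The hub name, method names, and URL are illustrative placeholders rather than our actual code, and the Service Bus trigger wiring in the Web Job is omitted; the Web Job connects to the site as a SignalR .NET client and invokes a hub method, which then pushes the data out to all connected clients.

    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR;
    using Microsoft.AspNet.SignalR.Client;

    // --- Inside the web application ---
    // The hub method the Web Job invokes; it fans the payload out
    // to every connected browser client.
    public class NotificationHub : Hub
    {
        public void BroadcastData(string payload)
        {
            Clients.All.updateData(payload);
        }
    }

    // --- Inside the Web Job ---
    // Called for each Service Bus message (trigger wiring omitted). The Web Job
    // connects to the site as a SignalR .NET client and asks the hub to broadcast.
    public class Broadcaster
    {
        public static async Task SendAsync(string payload)
        {
            using (var connection = new HubConnection("https://ourapp.azurewebsites.net/"))
            {
                IHubProxy hub = connection.CreateHubProxy("NotificationHub");
                await connection.Start();
                await hub.Invoke("BroadcastData", payload);
            }
        }
    }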
Here is an example of the spikes in HTTP Queue Length:
Note: we tried moving the web application into its very own App Service Plan (P3) and it still exhibited the same behaviour. The memory percentage was much lower than that shown here, around 20-40 percent, but the app still crashed with regular spikes in HTTP Queue Length. Thus, I don't believe the crashes are caused by a memory issue.
After spending some time trying to diagnose this issue, we decided to host the application (same code) on a VM (still in Azure) and change the URL to point to the VM instead of the web app. The new VM is very basic, with only 3.5 GB of memory.
Since moving to the VM, the application has been performing great: no crashes, and much better performance than in a Web App on a large dedicated service plan.
So it is difficult to say the code is at fault: when we run perfmon and watch other indicators on the VM, memory and queue lengths quickly drop back down after requests are served, whereas in the Web App they seemed to grow continually until the app crashed.
Just wondering if anyone else has experienced this behaviour with Web Apps? We are going to continue hosting in a VM, but we originally preferred hosting within a Web App, as PaaS is more appealing.
In case it helps, here is more information on the tech stack: HTML5, C#, Web API 2, Kendo MVVM, SignalR, Azure SQL Server, and Web Jobs processing Service Bus Topics.
Kind regards,
Stefan