
The title might seem a bit vague, so here are the details of my problem. I have a web application that consists of 3 layers: an Angular front-end project, a .NET Core API gateway middleware project, and a .NET project as the back-end layer. All 3 projects are separate from each other and work fine. My problem is that at most of the endpoints in my project, requests complete as expected at the millisecond level and return data, but a few of them seem to wait for an inexplicably long time, and this happens randomly. The Chrome response timing output for the GET request in question is shown below.

Chrome response timing details

As you can see above, most of the time is spent waiting. I also tracked these requests in Stackify, and the request appears to complete at the millisecond level in the back-end project. The Stackify output is shown below.

Stackify trace output

As shown above, my back-end response took at most 835 ms and returned the appropriate response. Lastly, the timing log that my gateway produces in Kibana also shows that the request took ~8 seconds.

API gateway log

To sum up the issue: the response is delayed in my API gateway project (.NET Core), and I have no clue why this happens randomly at some endpoints. Note that the gateway just forwards requests to the related endpoints; no other operations are performed there. Any help or suggestions for understanding and fixing this issue are appreciated in advance.

Hasan

2 Answers


As mentioned in this blog post, it turns out that the latency of the first requests is possibly caused by services registered as Singleton. The linked blog post suggests creating a warm-up function at startup to solve this, although I had mostly registered my services as Transient.
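The idea behind the warm-up function can be sketched independently of ASP.NET. This is a minimal illustration, not the blog post's code: a lazily built singleton makes the first caller pay the initialization cost, while an explicit warm-up at startup moves that cost out of the request path. The names (`ExpensiveService`, `warm_up_at_startup`) are hypothetical.

```python
import time

class ExpensiveService:
    """Stand-in for a service with costly construction
    (connection pools, caches, JIT compilation, etc.)."""
    def __init__(self):
        time.sleep(0.2)  # simulated initialization cost

_instance = None

def get_service():
    """Lazily built singleton: the first caller pays the 0.2 s cost."""
    global _instance
    if _instance is None:
        _instance = ExpensiveService()
    return _instance

def warm_up_at_startup():
    """Pre-build the singleton so the first real request is fast."""
    get_service()
```

Calling `warm_up_at_startup()` during application startup means the slow construction happens before any user request arrives.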

Besides that, in the comments of the previously mentioned blog post I found another helpful blog post that suggests the following solution.

The only reliable solution I could find is not very scientific: after deployment, simply send a couple of warmup requests to the endpoints of the api.

As described above, the solution for me was to send actual HTTP requests to my API project after deployment. Hope this helps those who run into the same problem I did.
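The warm-up requests can be scripted. Here is a minimal sketch using only the Python standard library; the base URL and endpoint paths would be your own (the ones below are placeholders, not from the question):

```python
from urllib.request import urlopen
from urllib.error import URLError

def warm_up(base_url, paths, timeout=30):
    """Send one GET request to each path so JIT compilation and
    service initialization happen before real users arrive.
    Returns a dict mapping each path to its HTTP status (or error)."""
    results = {}
    for path in paths:
        url = base_url.rstrip("/") + "/" + path.lstrip("/")
        try:
            with urlopen(url, timeout=timeout) as resp:
                results[path] = resp.status
        except URLError as exc:
            results[path] = str(exc)
    return results

# Example (placeholder URLs):
# warm_up("https://my-gateway.example.com", ["/api/orders", "/api/users"])
```

Running this as a post-deployment step is the "not very scientific" but reliable approach the quoted blog post describes.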

Hasan

If the app has gone offline, it takes a few seconds to come back online again; in most cases this is why TTFB (time to first byte) takes so much time.
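You can separate TTFB from total download time yourself to confirm where the delay is. This is a rough measurement sketch using only the standard library (the URL would be your own endpoint):

```python
import time
from urllib.request import urlopen

def measure_ttfb(url, timeout=30):
    """Return (ttfb, total) in seconds: time until the first response
    byte arrives, and time until the full body is downloaded."""
    start = time.perf_counter()
    with urlopen(url, timeout=timeout) as resp:
        resp.read(1)                       # wait for the first byte
        ttfb = time.perf_counter() - start
        resp.read()                        # drain the rest of the body
        total = time.perf_counter() - start
    return ttfb, total
```

If `ttfb` dominates `total`, the time is spent on the server (cold start, back-end work) rather than on transferring a large payload such as images.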

Another possible cause: if the request is used to fetch images, the reason could be the size of the images.

Last but not least, you may try to reduce the back-end workload on the server.

This post describes the same issue: How can I reduce the waiting (TTFB) time

LazZiya
  • Thank you for your answer. But for the sake of demonstrating the problem, the image size was very small. I will check what you pointed out. – Hasan Jan 11 '19 at 08:49
  • By the way, if the reason is the app being offline, you can use a web service to keep your app alive, e.g. "Uptime Robot"; it pings the app frequently to check its availability, and you can set the frequency to ~5-10 min, which is enough to keep your app alive. – LazZiya Jan 11 '19 at 09:24
  • I also tried pinging the server for a long time, and it seems to be up all the time. Similarly, though, some ping responses took 6 ms while the majority returned in 1 ms. – Hasan Jan 11 '19 at 10:25