
I am using an internal load balancer in GCP to route traffic between 2 VMs that are memory sensitive.
Currently, for testing purposes, I am trying with only 1 VM, so all traffic is routed to a single server.

I am using JavaScript on the client to post queries to the load balancer. An example is given below:

try {
   // 100-second client-side timeout for the request
   const response = await axios.post(LoadBalancerURL, data, { timeout: 100 * 1000 });
   // process the response
   return;
} catch (err) {
   console.log("error in query: ", err);
}

For every query I have set the desired timeout on the client side, so that if the server does not respond within 100 seconds the connection is terminated.

However, in this case the timeout is not taking effect: the request to the load balancer waits indefinitely and crashes the application.
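As a defensive workaround while debugging (not a root-cause fix), a hard deadline can be enforced in application code so the caller can never hang even if axios's `timeout` fails to fire. Below is a minimal sketch; `withDeadline` is a hypothetical helper, and the 100-second value matches the timeout in the question:

```javascript
// Wrap any promise with a hard deadline so the caller can never wait
// indefinitely, even if the HTTP client's own timeout does not trigger.
function withDeadline(promise, ms) {
  let timer;
  const deadline = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`deadline of ${ms} ms exceeded`)),
      ms
    );
  });
  // Clear the timer either way so it does not keep the process alive.
  return Promise.race([promise, deadline]).finally(() => clearTimeout(timer));
}

// Usage with the axios call from the question (LoadBalancerURL and data
// are the same names used above):
// const response = await withDeadline(
//   axios.post(LoadBalancerURL, data, { timeout: 100 * 1000 }),
//   100 * 1000
// );
```

Note that this only bounds the wait on the client; the underlying TCP connection may still be open, so it does not explain why the axios timeout is being ignored.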

I have also tested by calling the backend VMs directly, and did not face the issue mentioned above.

It would be a great help if someone could point me to the root cause of and a solution for this issue.

Update 1
Updating the question with more details:

The post query is issued from a GCP Cloud Run service.
Currently I am testing with only one backend server in the load balancer setup. I also tried sending the query while the server was down; as expected, I got an error response (without delay).

The load balancer settings are as given below:
Backend type: Instance group
Protocol: HTTP
Timeout: 200 seconds
Rate limit: I do not see any such option in the GCP console.

Update 2
I have performed a few more tests, and below are the findings.

Below is a screenshot from the load balancer logs. I can see that it responded back to the client (the Cloud Run service) after 4.106 seconds.
[screenshot: load balancer log entry]

However, I do not see this response arrive on the axios side.
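To compare the 4.106 seconds the load balancer logs with what the client actually observes, each call can be timed on the client side. A minimal sketch (the `timedCall` helper is hypothetical, just for instrumentation):

```javascript
// Time any async call so the client-observed latency can be compared
// with the latency the load balancer logs for the same request.
async function timedCall(label, fn) {
  const start = Date.now();
  try {
    return await fn();
  } finally {
    console.log(`${label} took ${Date.now() - start} ms on the client`);
  }
}

// Usage with the axios call from the question:
// const response = await timedCall('LB query', () =>
//   axios.post(LoadBalancerURL, data, { timeout: 100 * 1000 }));
```

If the load balancer logs ~4 seconds but the client-side measurement keeps running past 100 seconds, the response is being lost somewhere between the load balancer and the Cloud Run client rather than being delayed by the backend.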

Update 3

The load balancer backend configuration is as given below:
Balancing mode is set to Utilization with the values below.

  • Maximum backend utilization - 80%
  • Maximum RPS - 2
  • Scope - per instance
  • Capacity - 100%

Thank you,
KK

  • Where are the post queries hosted? Would that be a third VM instance going through the load balancer? Once the server is down, can you try to SSH to the server and share the result? Can you also share what type of load balancer you use? Lastly, can you share the rate limit you set if your internal load balancer is HTTPS, and what type of instance group you are using? – Yvan G. Apr 25 '23 at 23:44
  • @YvanG. I have updated the question with the details you asked for. I am not able to find the rate limit option in the internal load balancer. Could you please let me know where exactly I can find it? – KK2491 Apr 26 '23 at 05:14
  • Have you tried SSH when the server is down? If yes, please share the error message. For the rate limit or utilization, you can see that information once you click your internal load balancer and click edit. Go to the backend configuration, click the pencil icon, then click the dropdown button; it will show your balancing mode information. Please share the information from the balancing mode. – Yvan G. Apr 26 '23 at 22:09
  • The backend server does not go down when this issue occurs. However, I can still check whether I can access the server during that time window. Regarding the rate limit, I have updated my question. Balancing mode is set to Utilization, with the value set to 80. – KK2491 Apr 27 '23 at 03:07
