I use a VM scale set for my Node.js application. My app has an endpoint that is publicly accessible via `www.mydomain.com/api/healthcheck` and just returns some JSON.
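The endpoint itself is trivial, roughly like the sketch below (Express is an assumption here; the real app does a bit more, and it listens on port 8080 on each instance, as mentioned further down):

```js
// Minimal sketch of the health check endpoint (Express assumed; simplified).
const express = require('express');
const app = express();

// Publicly reachable as /api/healthcheck behind nginx / the load balancer.
app.get('/api/healthcheck', (req, res) => {
  res.status(200).json({ status: 'ok' });
});

// The node app listens on port 8080 on every instance.
app.listen(8080, '0.0.0.0', () => console.log('listening on 8080'));
```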
When I configure my health probe to use the `TCP` protocol, everything works fine and my API returns the expected JSON (and status 200).
However, when I switch my health probe to `HTTP` with `path=/api/healthcheck`, my website isn't accessible anymore (`ERR_CONNECTION_TIMED_OUT`). I guess the load balancer takes all instances out of rotation because the health probe reports every instance as unhealthy.
I use nginx in front of my node app, but for testing I also configured my load balancer to route port 80 to backend port 8080 (where my node app runs on every machine), so that I can bypass the nginx proxy. I get the same behaviour either way.
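For completeness, the nginx part is just a plain reverse proxy, roughly along these lines (simplified sketch; the real server_name and some other directives are omitted):

```nginx
# Sketch of the nginx site config on each VM (simplified).
server {
    listen 80;
    server_name www.mydomain.com;

    location / {
        # Forward everything (including /api/healthcheck) to the node app on 8080.
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```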
I'm out of ideas why my custom health check doesn't work. Hope you can help.
Edit: For testing, I did the following:
- run another Node.js app on port 3000 on every VM, which just prints "hello world" (without the nginx proxy!); see the sketch after this list
- create an LB rule for port 3000 and also configure my NSG to allow port 3000 for everyone
- at the beginning, my health probe is configured to use `TCP`
- result: `mydomain.com:3000/hello` is available (prints hello and returns 200)
- now I configure my health probe to use the `HTTP` protocol, port `3000` and path `/hello`
- result: my whole web app isn't available anymore
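The test app is just this kind of thing (sketch; no framework, no nginx in front):

```js
// Sketch of the throwaway test app running on port 3000 on every VM.
const http = require('http');

http.createServer((req, res) => {
  // Answer /hello (and everything else) with a plain 200 "hello world".
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('hello world');
}).listen(3000, '0.0.0.0', () => console.log('test app on 3000'));
```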