
I am doing AB (ApacheBench) testing of my application, where I send 3000 concurrent requests, 10000 requests in total. My application is a Spring Boot application with Actuator, and I use Docker and Kubernetes for containerization. During the testing my Actuator endpoints take much longer than expected to respond; because of this my pods restart and requests start failing.

Now I have disabled the liveness probe, and if I manually hit the Actuator endpoint during the test, I can see that it takes a long time to respond, and sometimes it does not return a result at all and just hangs.

According to the logs, each request is served by my application within 10 milliseconds, but the AB test results are completely different. Below are the results from the AB test:

Concurrency Level:      3000
Time taken for tests:   874.973 seconds
Complete requests:      10000
Failed requests:        6
   (Connect: 0, Receive: 0, Length: 6, Exceptions: 0)
Non-2xx responses:      4
Total transferred:      1210342 bytes
Total body sent:        4950000
HTML transferred:       20580 bytes
Requests per second:    11.43 [#/sec] (mean)
Time per request:       262491.958 [ms] (mean)
Time per request:       87.497 [ms] (mean, across all concurrent requests)
Transfer rate:          1.35 [Kbytes/sec] received
                        5.52 kb/s sent
                        6.88 kb/s total

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0  372 772.5      0    3051
Processing:  1152 226664 145414.1 188502  867403
Waiting:     1150 226682 145404.6 188523  867402
Total:       2171 227036 145372.2 188792  868447

Percentage of the requests served within a certain time (ms)
  50%  188792
  66%  249585
  75%  295993
  80%  330934
  90%  427890
  95%  516809
  98%  635143
  99%  716399
 100%  868447 (longest request)
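
For reference, the test was run with a command roughly like the one below (the payload file, host and path are placeholders, and the exact flags are approximate):

ab -n 10000 -c 3000 -p payload.json -T application/json http://<host>:8080/<endpoint>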

I am not able to understand this behaviour: the results show only about 11.43 requests served per second, which is very low. What could be the reason? Also, what would be the right way to keep the liveness probe enabled? For example, would relaxing the probe along the lines sketched below be the right approach?
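
(The values and the /actuator/health path below are just a guess on my part.)

livenessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 60
  periodSeconds: 20
  timeoutSeconds: 10
  failureThreshold: 6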

I have the following properties set in my application.properties:

server.tomcat.max-connections=10000
server.tomcat.max-threads=2000
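
One option I am considering is exposing the Actuator endpoints on a separate management port, so the health check does not compete with application traffic for Tomcat's request threads; a minimal sketch, assuming Spring Boot 2.x property names (the port number is arbitrary):

management.server.port=8081
management.endpoints.web.exposure.include=health,info

The liveness probe would then point at port 8081 instead of the application port.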
  • Are you doing this testing over the Internet? If you are sure that your application itself is fast, you could launch one relatively fat pod and run the ab test from inside Kubernetes. That way you can tell whether the problem is Kubernetes or not. – Vasili Angapov Apr 25 '19 at 04:35
  • No, I am doing the ab test from inside the Kubernetes pod itself. The result above is from the pod itself. – Lakshya Garg Apr 25 '19 at 04:38
  • Even if I run the application on a separate machine where only this application runs (non-containerized) and do the ab test locally, the results are almost the same. – Lakshya Garg Apr 25 '19 at 04:41
  • That means your application is just very slow. Try increasing the Kubernetes resources for it. Read more here: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ – Vasili Angapov Apr 25 '19 at 04:46
  • As I have already mentioned, I have run the application on a dedicated server with 4 GB RAM and got almost the same result. Inside the application, my endpoint does not interact with any other infrastructure, and as already mentioned, each request is served by the application within 10 ms. – Lakshya Garg Apr 25 '19 at 04:49
  • It seems your results are exactly as expected for server.tomcat.max-connections=10000 over 874.973 seconds, so please verify your settings. You can find more helpful information [here](https://stackoverflow.com/questions/24678661/tomcat-maxthreads-vs-maxconnections) and on the Tomcat connector [modes](https://stackoverflow.com/questions/11032739/what-is-the-difference-between-tomcats-bio-connector-and-nio-connector). – Mark Jun 17 '19 at 14:48
  • @Hanx this question is about the behaviour during the load test, not about the max-connections property. max-connections and max-threads are just information I provided in case it is required. – Lakshya Garg Jun 18 '19 at 03:21
  • As mentioned in the posts above, "__the same results occur in a non-containerized environment__." Please consider a different application and Tomcat configuration and different kinds of liveness probes: __command, HTTP request, or TCP socket__, as well as the readiness parameters. In addition, regarding "_stopped liveness probe and during the test..., I can see that it takes a lot of time to respond back... it does not even return the result and just hangs_": as per the [documentation](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/), that is expected behavior. – Mark Jun 18 '19 at 07:18

0 Answers