There is a drop in hits per second in our JMeter load tests after about 2 hours. JMeter is running in non-GUI mode in a distributed fashion. Our thread count per minute is 25k, and we run with 6 load-generator servers.
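For reference, the runs are launched with the standard non-GUI distributed invocation, roughly like this (a sketch; the test plan name, host list, and results file are placeholders):

jmeter -n -t load-test.jmx -R host1,host2,host3,host4,host5,host6 -l results.jtl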
In order to rule out JMeter as the bottleneck, I checked whether JMeter has a connection pool with a size limit.
But JMeter seems to maintain an HTTP connection pool per thread, as per Does JMeter pool HTTP connections?.
It's clear that the default value is 2000 milliseconds, as per the httpclient4.time_to_live property in jmeter.properties:
# TTL (in Milliseconds) represents an absolute value.
# No matter what, the connection will not be re-used beyond its TTL.
#httpclient4.time_to_live=2000
JMeter version: 4.0, which uses the HttpClient4 implementation.
This means JMeter creates connections per thread, and each connection is discarded rather than re-used once it is 2 seconds old.
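If this 2-second TTL were suspected, it can be raised by overriding the property; a minimal sketch (the 60000 here is an arbitrary illustrative value, and the .jmx name is a placeholder), either in user.properties:

httpclient4.time_to_live=60000

or on the command line with -J:

jmeter -n -t load-test.jmx -Jhttpclient4.time_to_live=60000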
The receiving Jetty web server has the following thread pool values (see the configuration sketch after this list):
maxThreads = 3200
minThreads = 656
idleTimeout = 60000
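For context, these values correspond to Jetty's QueuedThreadPool; a minimal embedded-server sketch, assuming Jetty 9.x (the class name and port are hypothetical):

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class JettyThreadPoolSketch {
    public static void main(String[] args) throws Exception {
        // QueuedThreadPool(maxThreads, minThreads, idleTimeout in ms),
        // matching the values listed above
        QueuedThreadPool threadPool = new QueuedThreadPool(3200, 656, 60000);
        Server server = new Server(threadPool);

        ServerConnector connector = new ServerConnector(server);
        connector.setPort(8080); // hypothetical port
        server.addConnector(connector);

        server.start();
        server.join();
    }
}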
Is it possible that the drop in hits per second from JMeter is induced by slow or absent responses from the Jetty server? Are there any rules around matching the JMeter thread count to the target web server's thread pool size?
Note: I understand that throughput generally depends on the responses from the Application Under Test. But will JMeter's hits per second to the target server be affected in any way by the Application Under Test?
Update: From https://stackoverflow.com/a/40689714/1165859, it's clear that there is a correlation between hits per second and the Application Under Test. But my issue is that everything is fine for the first 2 hours, after which the hits per second drop.