
If I set up Tomcat and stream a static file from it, I've noticed that if the client "pauses" (stops receiving from that socket) for anything longer than ~20s, Tomcat appears to sever the connection arbitrarily, even though the request headers were fully received and the connection is still "connected" (the client is still alive). What configuration parameter controls this? The documentation mentions connectionTimeout, but only in relation to initial header parsing and reading the request body, not to writing the server's response. Is there some kind of inactivity timeout going on here?

It is reproducible: stream a (large) static file from any Tomcat app and receive it with a client that pauses, e.g. test.rb:

require "socket"
host = "localhost"
port = 8080
socket = TCPSocket.new host,port
url = "/your_webapp/large_static_filename.ext"
request = "GET #{url} HTTP/1.0\r\nHost:#{host}\r\n\r\n"
socket.print request
puts "reading"
response = socket.sysread 1_000_000
puts response.length
puts response[0..300]
puts "sleeping 25" # with 10s or several reads separated by 10s, it is OK
sleep 25
response2 = socket.read
# this should equal the total size, but doesn't...
puts "sum=#{response2.length + response.length}"

It works fine with other servers, so it's probably not some kind of OS limit. And it's just vanilla Tomcat, so no mod_jk or workers are involved...

rogerdpack
  • Sounds vaguely like my question [Where does the socket timeout of 21000 ms come from?](https://stackoverflow.com/questions/26896414/where-does-the-socket-timeout-of-21000-ms-come-from). In my case it turned out to be a device-specific Android thing, but some of the other answers might be relevant. – Kevin Krumwiede Jul 28 '17 at 17:10

1 Answer


The only thing that affected this "inactivity timeout" appears to be the connectionTimeout setting on the Connector:

<Connector port="8080" ... connectionTimeout="30000" />

And it only applies while Tomcat is actively trying to write data onto the wire but can't, because the client is refusing it or the connection has been lost. If the servlet is just busy doing CPU work in the background and then writes to the wire (and the write is received, or buffered by the kernel), there's no problem: that phase can exceed connectionTimeout, so it isn't what's being timed.

My hunch is that Tomcat has a "built-in" write timeout (undocumented? not separately configurable?) which defaults to the connectionTimeout value, e.g. (from the Tomcat source, randomly selected):

java/org/apache/tomcat/util/net/NioEndpoint.java
625:            ka.setWriteTimeout(getConnectionTimeout());
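If you want to check whether your particular Tomcat build behaves the same way, grepping its source tree for that pattern is enough; assuming a checkout laid out like the 8.x/9.x tree, something along these lines may show the same default being applied in the other endpoint implementations as well:

grep -rn "setWriteTimeout(getConnectionTimeout())" java/org/apache/tomcat/util/net/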

Now, whether this is "bad" or not is open to interpretation. You run into this severing of the connection by Tomcat after either the TCP channel has been disrupted enough to stall the transfer, or the client is blocking instead of receiving the bytes, FWIW...

FWIW, the connectionTimeout setting affects many things:

- The total amount of time it takes to receive an HTTP GET request.
- The total amount of time between receipt of TCP packets on a POST or PUT request.
- The total amount of time between ACKs on transmissions of TCP packets in responses.

...and now, apparently, also the write timeout.

End result: we had a flaky network, so these are "expected" timeouts/severed connections (just surfaced via a config setting with a different name, LOL).

rogerdpack