3

I am using an InputStream to stream a file over the network.

However, if the network goes down during the read, the read method blocks and never recovers, even if the network comes back.

I was wondering how I should handle this case, and whether some exception shouldn't be thrown if the InputStream goes away.

Code is like this.

URL someUrl = new URL("http://somefile.com");
InputStream inputStream = someUrl.openStream();
byte[] byteArray = new byte[1024];
inputStream.read(byteArray, 0, byteArray.length);

So somewhere after calling read the network goes down and the read method blocks.

How can I deal with this situation, given that read doesn't seem to throw an exception?

madlad
    related? http://stackoverflow.com/questions/804951/is-it-possible-to-read-from-a-java-inputstream-with-a-timeout – Mikeb Dec 22 '11 at 15:54
  • possible duplicate of [Resume http file download in java](http://stackoverflow.com/questions/6237079/resume-http-file-download-in-java) – Oleg Mikheev Dec 22 '11 at 15:56
  • Try using [`openConnection`](http://docs.oracle.com/javase/1.5.0/docs/api/java/net/URL.html#openConnection()) and then modify the returned `URLConnection` object before opening the stream. – home Dec 22 '11 at 15:57

5 Answers

0

The inputStream.read() call is blocking, so it should be made on a separate thread. There is an alternative way of avoiding this situation: InputStream also has an available() method, which returns the number of bytes that can be read from the stream without blocking.

Call the read method only if there are some bytes available in the stream.

byte[] recv = new byte[1024]; // destination buffer
int length = 0;
if (in.available() > 0) {
    length = in.read(recv);
}

InputStream's read() does throw IOException. Hope this information is useful to you.

  • Be aware that relying upon `available()` can be an issue. It will still return `0` even when the socket has been closed. The only way to tell that the connection has terminated from the other end is to try to read from it, which blocks. – Zhro Nov 26 '16 at 00:45
0

From looking at the documentation here: http://docs.oracle.com/javase/6/docs/api/java/io/InputStream.html

It looks like read does throw an exception.

There are a few options to solve your specific problem.

One option is to track the progress of the download, and keep that status elsewhere in your program. Then, if the download fails, you can restart it and resume at the point of failure.

However, I would instead restart the download from the beginning if it fails. You will need to restart it anyway, so you might as well redo the whole thing when there is a failure.
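If you do want the resume option, one way to make it concrete is an HTTP Range request. This is only a sketch with hypothetical names, and it assumes the server supports byte ranges; if it doesn't, the full body comes back with status 200 and appending would duplicate data:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;
import java.net.URLConnection;

public class ResumeDownload {

    // Range header value for resuming at a given byte offset, e.g. "bytes=1024-"
    static String rangeHeader(long bytesAlreadyDownloaded) {
        return "bytes=" + bytesAlreadyDownloaded + "-";
    }

    // Resume a download into `target`, appending from wherever it left off.
    public static void resume(String fileUrl, File target) throws IOException {
        long done = target.exists() ? target.length() : 0;
        URLConnection conn = new URL(fileUrl).openConnection();
        conn.setReadTimeout(15000); // fail fast instead of blocking forever
        if (done > 0) {
            // ask the server for the remainder only (needs Range support)
            conn.setRequestProperty("Range", rangeHeader(done));
        }
        try (InputStream in = conn.getInputStream();
             OutputStream out = new FileOutputStream(target, true)) { // append mode
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
    }
}
```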

Alan Delimon
0

The short answer is to use Selectors from the nio package. They allow non-blocking network operations.

If you intend to use old sockets, you may try some code samples from here
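A minimal sketch of the Selector idea (hypothetical helper; it assumes an already-connected SocketChannel): `select(timeout)` returns 0 when nothing arrived in time, so the calling thread is never stuck in a blocked read.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class SelectorRead {

    /**
     * Attempts to read from a connected channel without blocking forever.
     * Returns the number of bytes read, or 0 if nothing arrived within
     * timeoutMillis; the caller decides whether to retry or give up.
     */
    public static int readWithTimeout(SocketChannel channel, ByteBuffer buf,
                                      long timeoutMillis) throws IOException {
        channel.configureBlocking(false); // required before registering
        try (Selector selector = Selector.open()) {
            channel.register(selector, SelectionKey.OP_READ);
            if (selector.select(timeoutMillis) == 0) {
                return 0; // timed out: the peer sent nothing in time
            }
            return channel.read(buf); // -1 signals an orderly close
        }
    }
}
```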

WeMakeSoftware
    That's really the extremely long answer, because it's vastly more code to write to do this. And the only way you can deal with a server that stopped responding is to wait X units of time and give up; something that TCP already gives us, even in Java's blocking IO library. – chubbsondubs Dec 22 '11 at 16:23
0

Have a separate thread running that holds a reference to your InputStream, and reset its timer whenever data is received. If the timer has not been reset for N seconds, have that thread close the InputStream. The blocked read(...) will then throw an IOException, and you can recover from it.

What you need is similar to a watchdog. Something like this:

public class WatchDogThread extends Thread
{
    private final Runnable timeoutAction;
    private final AtomicLong lastPoke = new AtomicLong( System.currentTimeMillis() );
    private final long maxWaitTime;

    public WatchDogThread( Runnable timeoutAction, long maxWaitTime )
    {
        this.timeoutAction = timeoutAction;
        this.maxWaitTime = maxWaitTime;
    }

    public void poke()
    {
        lastPoke.set( System.currentTimeMillis() );
    }

    public void run()
    {
        while( !Thread.interrupted() ) {
            if( lastPoke.get() + maxWaitTime < System.currentTimeMillis() ) {
                timeoutAction.run();
                break;
            }
            try {
                Thread.sleep( 1000 );
            } catch( InterruptedException e ) {
                break;
            }
        }
    }
}

public class Example
{
    public void method() throws IOException
    {
        final InputStream is = null; // obtain your InputStream here
        WatchDogThread watchDog =
            new WatchDogThread(
                new Runnable()
                {
                    @Override
                    public void run()
                    {
                        try {
                            is.close();
                        } catch( IOException e ) {
                            System.err.println( "Failed to close: " + e.getMessage() );
                        }
                    }
                },
                10000
            );
        watchDog.start();
        try {
            while( is.read() != -1 ) {
                watchDog.poke(); // reset the timer after each successful read
            }
        } finally {
            watchDog.interrupt();
        }
    }
}

EDIT:

As noted in the comments, sockets already have a timeout facility. Setting one would be preferred over running a watchdog thread.

Ravi Wallau
    This is exactly what you already have with Socket timeouts. If the data isn't sent fast enough and a timeout is reached the read() will throw an IOException and you can safely inform the client it ain't happening today. Without the extra overhead of creating more threads, that will all be just sitting around waiting for timeouts. Plus this is a backdoor to subvert the whole point of a thread pool, and could lead to 100's of threads being created because the external server stopped responding. – chubbsondubs Dec 22 '11 at 16:21
  • Ok, that's fair. I will edit my response. I didn't do my research before posting the response. – Ravi Wallau Dec 23 '11 at 15:55
-1

This isn't a big deal. All you need to do is set a timeout on your connection.

URL url = ...;
URLConnection conn = url.openConnection();
conn.setConnectTimeout( 30000 );
conn.setReadTimeout( 15000 );
InputStream is = conn.getInputStream();

Eventually, one of the following things will happen: your network comes back and your transfer resumes; the TCP stack eventually times out, in which case an exception IS thrown; or the socket gets a closed/reset condition and you get an IOException. In all cases the thread lets go of the read() call and returns to the pool ready to service other requests, without you having to do anything extra.

For example, if your network goes out you won't be getting any new connections coming in, so the fact that this thread is tied up isn't going to make any difference because you don't have connections coming in. So your network going out isn't the problem.

The more likely scenario is that the server you are talking to gets jammed up and stops sending you data, which slows down your clients as well. This is where tuning your timeouts matters more than writing extra code, using NIO, or spawning separate threads. Separate threads will just increase your machine's load, and in the end force you to abandon the thread after a timeout, which is exactly what TCP already gives you. You could also tear your server up by creating a new thread for every request; if you start abandoning threads, you could easily wind up with hundreds of them all sitting around waiting for a timeout on their sockets.

If you have a high volume of traffic on your server going through this method, then any hold-up in response time from a dependency, like an external server, is going to affect your response time. So you will have to figure out how long you are willing to wait before you just error out and tell the client to try again, because the server you're reading this file from isn't giving it up fast enough.

Other ideas are caching the file locally, limiting your network trips, etc., to reduce your exposure to an unresponsive peer. The exact same thing can happen with databases on external servers: if your DB doesn't send you responses fast enough, it can jam up your thread pool just like a file that doesn't come down quickly enough. So why treat file servers any differently? More error handling isn't going to fix your problem; it will just make your code obtuse.
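Putting the timeout advice together, here is a sketch of a full download loop (hypothetical class name). With a read timeout set, both `getInputStream()` and `read()` throw `java.net.SocketTimeoutException` when the peer stalls for longer than the timeout:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.SocketTimeoutException;
import java.net.URL;
import java.net.URLConnection;

public class TimedDownload {

    // Download the whole resource. read() throws SocketTimeoutException
    // if the peer stops sending for longer than the read timeout.
    public static byte[] readAll(String fileUrl) throws IOException {
        URLConnection conn = new URL(fileUrl).openConnection();
        conn.setConnectTimeout(30000); // give up on a dead connect attempt
        conn.setReadTimeout(15000);    // give up on a stalled transfer
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (InputStream in = conn.getInputStream()) {
            byte[] buf = new byte[1024];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        } catch (SocketTimeoutException e) {
            // the read timeout fired: the network stalled; retry or report here
            throw e;
        }
        return out.toByteArray();
    }
}
```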

chubbsondubs
  • By default there are no read timeouts in TCP. You have to set one explicitly. You can only rely on TCP timers expiring if you are *sending*. – user207421 Nov 17 '16 at 08:09
  • Ok added some clarification to reflect you need to set the timeouts. I thought URL would set them to a default, but I guess not. – chubbsondubs Nov 18 '16 at 19:22