
I am using the new HttpClient shipped with JDK 11 to make many requests (to Github's API, but I think that's irrelevant), especially GETs.

For each request, I build and use an HttpClient, like this:

final ExecutorService executor = Executors.newSingleThreadExecutor();
final HttpClient client = HttpClient
    .newBuilder()
    .followRedirects(HttpClient.Redirect.NORMAL)
    .connectTimeout(Duration.ofSeconds(10))
    .executor(executor)
    .build();
try {
    // send the request and return the parsed response;
} finally {
    // manually shut down the executor I supplied, because HttpClient doesn't implement Closeable,
    // so I'm not sure when it will release its resources.
    executor.shutdownNow();
}

This seems to work fine, except that every now and then I get the exception below, and requests stop working until I restart the app:

Caused by: java.net.ConnectException: Cannot assign requested address
...
Caused by: java.net.BindException: Cannot assign requested address
    at java.base/sun.nio.ch.Net.connect0(Native Method) ~[na:na]
    at java.base/sun.nio.ch.Net.connect(Net.java:476) ~[na:na]
    at java.base/sun.nio.ch.Net.connect(Net.java:468) ~[na:na]

Note that this is NOT the JVM_Bind case.

I am not calling localhost or listening on a localhost port; I am making GET requests to an external API. I've also checked the /etc/hosts file and it looks fine: 127.0.0.1 is mapped to localhost.

Does anyone know why this happens and how could I fix it? Any help would be greatly appreciated.

asked by amihaiemil
  • Can't it be that you bumped into ulimit of some sort? – skapral Oct 11 '21 at 12:31
  • It's definitely not Github's Rate Limiting - we would have to get a response from them anyway, and the problem wouldn't be fixed with an app restart. – amihaiemil Oct 11 '21 at 12:33
  • No, I meant - some local limit on opened connections in your host, or something like that. Can't it be that each time you instantiate new HttpClient, it holds some ulimit-controlled resources, and you don't properly release them, exceeding the limit eventually? – skapral Oct 11 '21 at 12:45
  • 1
    Not sure if it's a main problem here, but you are not waiting all tasks to complete. Check `executor.shutdownNow();` documentation it's returning all pending `Runnable`s from executor and **not awaiting** current tasks for completion. Use `executor.awaitTermination()` after `shutdownNow` to wait for it. Also, I don't really understand why you don't share the same client for different requests. – Kirill Oct 11 '21 at 12:46
  • Possibly caused by https://bugs.openjdk.java.net/browse/JDK-8221395. I get the same problem after the JDK 11 HTTPClient has been running a long time due to too many sockets in CLOSE_WAIT. See also https://stackoverflow.com/questions/55271192/connections-leaking-with-state-close-wait-with-httpclient – lreeder Nov 13 '21 at 14:52
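A minimal sketch of the graceful shutdown that Kirill's comment suggests (the helper class name and the 10-second timeout are illustrative, not from the original code):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

final class ExecutorShutdown {

    // Stop accepting new tasks, wait for in-flight tasks to finish,
    // and only then force-stop whatever is still running.
    static void shutdownGracefully(final ExecutorService executor) {
        executor.shutdown();
        try {
            if (!executor.awaitTermination(10, TimeUnit.SECONDS)) {
                executor.shutdownNow();
            }
        } catch (final InterruptedException e) {
            executor.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }
}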

1 Answer


You can try using one shared HttpClient for all requests, since it manages a connection pool internally and may keep connections alive to the same host (if the server supports it). Performing a lot of requests through separate HttpClients is inefficient, because you end up with n thread pools and n connection pools, where n is the number of clients, and they won't share the underlying connections to the host.

Usually, an application creates a single HttpClient instance somewhere around main() and provides it as a dependency to the code that needs it.

E.g.:

public static void main(String... args) {
  final HttpClient client = HttpClient
    .newBuilder()
    .followRedirects(HttpClient.Redirect.NORMAL)
    .connectTimeout(Duration.ofSeconds(10))
    .build();
  new GithubWorker(client).start();
}
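GithubWorker itself is not shown above; a minimal sketch of what such a class could look like (the class shape and the endpoint URL are assumptions for illustration), reusing the one shared client for every GET:

import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Hypothetical worker: the point is that every request goes through
// the one shared HttpClient passed in from main().
final class GithubWorker {

    private final HttpClient client;

    GithubWorker(final HttpClient client) {
        this.client = client;
    }

    void start() throws IOException, InterruptedException {
        final HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://api.github.com/repos/octocat/Hello-World"))
            .timeout(Duration.ofSeconds(10))
            .GET()
            .build();
        // The shared client can reuse pooled connections where the server allows it.
        final HttpResponse<String> response =
            this.client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}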

Update: how to stop the current client

According to the comments on the JDK-internal HttpClientImpl.stop() method:

    // Called from the SelectorManager thread, just before exiting.
    // Clears the HTTP/1.1 and HTTP/2 cache, ensuring that the connections
    // that may be still lingering there are properly closed (and their
    // possibly still opened SocketChannel released).
    private void stop() {
        // Clears HTTP/1.1 cache and close its connections
        connections.stop();
        // Clears HTTP/2 cache and close its connections.
        client2.stop();
        // shutdown the executor if needed
        if (isDefaultExecutor) delegatingExecutor.shutdown();
    }

This method is called from SelectorManager.shutdown() (the SelectorManager is created in HttpClient's constructor), and shutdown() is called in a finally block around the busy loop in SelectorManager.run() (yes, SelectorManager extends Thread). That busy loop is while (!Thread.currentThread().isInterrupted()), so to reach the finally block you need to either make the loop fail with an exception or interrupt the running thread.
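There is no public API that calls stop() directly, so in practice "closing" such a client comes down to letting in-flight operations finish, dropping every strong reference to the client so the GC can eventually release it (see the comments below), and shutting down any executor you supplied yourself, since the snippet above only shuts down the client's default executor. A rough sketch under those assumptions (the ClientHolder class is purely illustrative):

import java.net.http.HttpClient;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch only: there is no close()/stop() to call on the JDK 11 HttpClient itself.
final class ClientHolder {

    private HttpClient client;
    private final ExecutorService executor;

    ClientHolder(final HttpClient client, final ExecutorService executor) {
        this.client = client;
        this.executor = executor;
    }

    HttpClient client() {
        return this.client;
    }

    void release() throws InterruptedException {
        // Drop the strong reference (real code must first make sure
        // no requests are still in flight).
        this.client = null;
        // The executor we passed in is our responsibility to shut down.
        this.executor.shutdown();
        this.executor.awaitTermination(10, TimeUnit.SECONDS);
    }
}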

answered by Kirill
  • Thanks. I'll try your approach as well. However, I still don't understand why resources are not released in my case as well. I understand it's less efficient to have multiple HttpClient instances, but resources should be cleared in my scenario as well :D – amihaiemil Oct 11 '21 at 13:19
  • @amihaiemil sure, I explained this behavior in the update – Kirill Oct 11 '21 at 14:01
  • Resources will be released when: 1. all operations have terminated, and 2. there are no strong references to the HttpClient. This depends on the GC clearing up weak references, which could take some time after all strong references have been released. Note that if you provide an executor to the HttpClient, it is still *your* responsibility to shut down that executor, if needed. – daniel Oct 11 '21 at 15:58
  • 1
    @daniel I saw this discussion about releasing resources when the GC comes and does its job. But I guess it's a mistake in design? I would actually expect HttpClient to be Closeable or Autocloseable and release everything on close(), right? :( – amihaiemil Oct 11 '21 at 17:52
  • It's not a mistake - but a conscious design choice. There are many pitfalls when implementing a close() or shutdown() method, especially when some other thread might still be using the client and sending out new requests. You have to specify what is supposed to happen if any operation is still outstanding, which exception they will get, make sure that the implementation follows the spec... I am not saying that HttpClient shouldn't ever implement Autocloseable, just that it's far from trivial to specify and implement correctly. It was left off as a possible future enhancement. – daniel Oct 14 '21 at 13:50
  • See https://bugs.openjdk.java.net/browse/JDK-8267140 – daniel Oct 14 '21 at 13:55
  • 1
    I switched to a global HttpClient (also didn't need to specify my own executor anymore) and the problem did not reproduce for quite a long time now, so I think I can safely assume it's fixed. Therefore this is the accepted answer. Many thanks, everybody! – amihaiemil Dec 09 '21 at 07:25