
I am using the RESTful HttpGet client from

import org.apache.http.client.methods.HttpGet;

I created a function to query a service on a different computer. The call works fine; however, I run into a "too many open files" error on the service. My side simply gets back a 500 error, which I catch.

I spoke with the vendor, and they were quite adamant that the RESTful call should be made in a persistent way, that I am simply not freeing something, and that the problem is on my side.

I wrote a stress function, see below, to help isolate the problem. To my knowledge, I am releasing everything. I am still new to Java, so maybe I am just not seeing something.

import java.net.URI;

import org.apache.http.client.ClientProtocolException;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public void doStressTest()
{
    String strUri = "http://<ip address>:<port>/task?vara=dataa&varb=datab";
    HttpGet oRestyGet = new HttpGet();
    oRestyGet.addHeader("Accept", "application/xml");

    for (int iLoop = 0; iLoop < 1000; iLoop++)
    {
        CloseableHttpClient httpclient = HttpClients.createDefault();
        try
        {
            oRestyGet.setURI(new URI(strUri));
            try
            {
                CloseableHttpResponse response2 = httpclient.execute(oRestyGet);
                try
                {
                    String strResponseHeader = response2.getStatusLine().toString();
                    if (false == "HTTP/1.1 200 OK".equalsIgnoreCase(strResponseHeader))
                        return;

                    continue;
                }
                finally
                {
                    response2.close();
                }
            }
            catch (ClientProtocolException e)
            {
                e.printStackTrace();
            }
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
        finally
        {
            try
            {
                httpclient.close();
            }
            catch (Exception e)
            {
                e.printStackTrace();
            }
        }
    }

    try
    {
        oRestyGet.releaseConnection();
    }
    catch (Exception e)
    {
        e.printStackTrace();
    }
}
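
For comparison, here is a minimal sketch of the same loop written with try-with-resources (Java 7+, assuming HttpClient 4.3+, where both classes implement Closeable). It is only a more compact form of the code above, not a different approach: the client and the response are closed automatically even when an exception is thrown.

import java.io.IOException;

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public void doStressTestTryWithResources() throws IOException
{
    String strUri = "http://<ip address>:<port>/task?vara=dataa&varb=datab";

    for (int iLoop = 0; iLoop < 1000; iLoop++)
    {
        HttpGet oRestyGet = new HttpGet(strUri);
        oRestyGet.addHeader("Accept", "application/xml");

        // Both resources are closed automatically, in reverse order,
        // when this block exits -- normally or via an exception.
        try (CloseableHttpClient httpclient = HttpClients.createDefault();
             CloseableHttpResponse response = httpclient.execute(oRestyGet))
        {
            if (response.getStatusLine().getStatusCode() != 200)
                return;
        }
    }
}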

UPDATE: I thought that I would add the text of what the vendor had to say, in case it might help. I should add that in the stress test above I am opening a single HTTP GET request object, which for my "session" is persistent and reused for every request.

This means that the system is running out of FDs (file descriptors) for the user running the process. This happens when there are too many open sockets or FDs. In general, with HTTP it is highly recommended that requests are sent in a persistent way, that is, you have one HTTP connection for the entire session rather than opening and then closing a connection for each request. More often than not you end up with lingering connections.

I am closing the "closeable" CloseableHttpClient, and I do not think that I am required to have only one. I should add that "I" am not running out of anything; the service is. The vendor seems to say that the service is perfect, as is the OS. What more can be done to isolate the problem?
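
For completeness, here is a minimal sketch of what I understand the vendor to mean by "persistent": one CloseableHttpClient, backed by a pooling connection manager and reused for every request, with each response entity fully consumed so the keep-alive connection goes back to the pool. The specific classes (PoolingHttpClientConnectionManager, EntityUtils) are my assumption of HttpClient 4.3+ usage; the vendor did not name any API.

import java.io.IOException;

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.util.EntityUtils;

public void doPersistentStressTest() throws IOException
{
    String strUri = "http://<ip address>:<port>/task?vara=dataa&varb=datab";

    // One connection manager and one client for the entire session.
    PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
    cm.setDefaultMaxPerRoute(1);

    try (CloseableHttpClient httpclient = HttpClients.custom()
            .setConnectionManager(cm)
            .build())
    {
        HttpGet oRestyGet = new HttpGet(strUri);
        oRestyGet.addHeader("Accept", "application/xml");

        for (int iLoop = 0; iLoop < 1000; iLoop++)
        {
            try (CloseableHttpResponse response = httpclient.execute(oRestyGet))
            {
                // Consuming the entity releases the connection back to the
                // pool, so the next request reuses the same socket.
                EntityUtils.consume(response.getEntity());

                if (response.getStatusLine().getStatusCode() != 200)
                    return;
            }
        }
    }
}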

UPDATE 2: (I sent the log file from the service to the vendor, and they had the following to say. Stalemate?)

It looks like the problem is that there are too many connections left open by your interface. I included the log file from a session with Firefox. You can see how I can make multiple HTTP requests over the same connection using a browser.

There is only one line where the initial connection is made:

HTTP connection from [127.0.0.1]...ALLOWED

The subsequent GET requests are from the same connection.

...your logfile shows: HTTP connection from [192.168.20.123]...ALLOWED

for each GET request you make, and those connections are not closed.

UPDATE 3: I have access to the log that the service generates, although no access to the source. Java issues the connection and the GET request as one operation in response to the line:

CloseableHttpResponse response2 = httpclient.execute(oRestyGet);

I am issuing httpclient.close(), which generates no log entry, so I suspect that the service simply does not respond to it.

As I am unfamiliar with the mechanics of the other end, maybe the service simply responds to events, and the problem is with CentOS not handling the close() call properly. Either way, with this method, the problem is not mine. Proving it is another story. The alternative is some other solution that frees resources properly.

Sarah Weinberger

3 Answers


If you're getting an HTTP 500 Internal Server Error, that has nothing to do with your client code or with how you manage HTTP connections. The problem is entirely on the vendor's side.

Thorn G
  • I agree with you; however, I will hold off on accepting the answer that the problem is on the vendor's side. I should be able to create/destroy the HTTP GET request object too, but oh well, I might as well just have one. The stress test code above does look complete and should work. Whether or not there are alternate solutions is a different topic. I will attempt to capture a log and send it to the vendor. – Sarah Weinberger Mar 19 '14 at 14:52
  • The problem comes well before the service/OS runs out of file descriptors (FDs). The problem is releasing the connection once it is created, which does not happen. I checked the methods available to me, and close() on httpclient is the only one, which I call. It is entirely possible that httpclient itself has a problem. Much depends on the engineering of the layers below my call. – Sarah Weinberger Mar 19 '14 at 17:44

I would use lsof to see what files are being held open. You could also look at netstat -n -a to see all of the connections that are open. Have you tried bumping up the open file limit to see if you can get around the problem that way?

Too many open files ( ulimit already changed )
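
If running lsof repeatedly is awkward, a hypothetical alternative on Linux is to count the entries under /proc/<pid>/fd from Java while the stress test runs; the count is roughly what "lsof -p <pid> | wc -l" reports. This helper is not part of the original code and only works where a /proc filesystem exists:

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical monitoring helper: count the open file descriptors of a
// process by listing /proc/<pid>/fd (Linux only).
public static int countOpenFds(long pid) throws IOException
{
    int count = 0;
    try (DirectoryStream<Path> fds = Files.newDirectoryStream(Paths.get("/proc/" + pid + "/fd")))
    {
        for (Path ignored : fds)
            count++;
    }
    return count;
}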

Zeki
  • I did not know about the lsof command. Thank you. The lsof command shows 1006 lines that look alike; those are my stress test requests. The service bombed out at 1007, so that is right on the mark. "netstat -n -a" did not show anything interesting that I saw; the total output for the entire command was 512 lines, so that was okay. I assume lsof lists FDs, which matches what the vendor says. What do I do with this information? – Sarah Weinberger Mar 19 '14 at 19:25
  • I did not bump up the open file limit; I would have to research how, but that would not do any good, as the problem would just happen a bit later. One request equals one file handle left open until I close the service or restart the OS. – Sarah Weinberger Mar 19 '14 at 19:26

Thorn G was right in his answer that the problem was on the vendor side, but I created another answer because, although the vendor was not closing a file handle that had been opened, the stress test still crashed.

I had to switch to Jersey, which solved the problem of httpclient.execute() performing an open/close operation with every GET request, and then the vendor had to give me a build that fixed the issue on their end, hence two issues.

The Apache framework appears to have problems, at least with my implementation, while the Sun framework (Jersey) worked nicely in my stress test.

Now I need to test with the real code.
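
For reference, a minimal sketch of the shape of the Jersey version, assuming the Jersey 1.x client API (com.sun.jersey.api.client). This is not the exact code I ended up with, just the general pattern of one Client reused for every request:

import com.sun.jersey.api.client.Client;
import com.sun.jersey.api.client.ClientResponse;
import com.sun.jersey.api.client.WebResource;

public void doJerseyStressTest()
{
    String strUri = "http://<ip address>:<port>/task?vara=dataa&varb=datab";

    // One Client for the whole session; it is reused for every request.
    Client client = Client.create();
    try
    {
        WebResource resource = client.resource(strUri);

        for (int iLoop = 0; iLoop < 1000; iLoop++)
        {
            ClientResponse response = resource.accept("application/xml")
                                              .get(ClientResponse.class);
            try
            {
                if (response.getStatus() != 200)
                    return;
            }
            finally
            {
                // Release the connection used by this request.
                response.close();
            }
        }
    }
    finally
    {
        client.destroy();
    }
}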

Sarah Weinberger