In a crawler-like project we have a common task: resolving/expanding thousands of URLs. Say we have (a very simplified example):
GET 'http://bit.ly/4Agih5' returns one of the 3xx codes, and we follow the redirect straight to:
GET 'http://stackoverflow.com', which returns 200. So 'stackoverflow.com' is the result we need.
Any URL (not only well-known shorteners like bit.ly) is allowed as input. Some redirect once, some don't redirect at all (the result is then the URL itself), and some redirect multiple times. Our task is to follow all redirects, imitating browser behavior as closely as possible. In general, given some URL A, the resolver should return a URL B that is the same as what a browser would land on after opening A.
So far we have used Java, a pool of threads, and plain URLConnection to solve this task (see the sketch below). The advantages are obvious:
- simplicity - just create a URLConnection, enable following redirects, and that's (almost) it;
- good HTTP support - Java provides everything we need to imitate a browser as closely as possible: automatic redirect following and cookie support.
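For reference, here is a minimal sketch of what we do per URL (the class and method names are ours for illustration; error handling and cookie handling are omitted):

    import java.net.HttpURLConnection;
    import java.net.URL;

    public final class UrlResolver {
        // Sketch of the current approach: let HttpURLConnection follow
        // redirects internally, then read the final URL back from it.
        static String resolve(String url) throws Exception {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(url).openConnection();
            conn.setInstanceFollowRedirects(true); // the default; shown for clarity
            conn.setRequestProperty("User-Agent", "Mozilla/5.0"); // imitate a browser
            conn.getInputStream().close();  // connects; body buffering starts here
            String resolved = conn.getURL().toString(); // final URL after redirects
            conn.disconnect();
            return resolved;
        }
    }

(One caveat we have to work around: HttpURLConnection does not follow redirects across protocols, e.g. from http to https, so in practice resolve has to be called in a loop.)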
Unfortunately such an approach also has drawbacks:
- performance - threads are not free, and URLConnection starts downloading the document right after getInputStream() is called, even if we don't need the body (a possible HEAD-based workaround is sketched after this list);
- memory footprint - we are not sure exactly, but it seems that URL and URLConnection are quite heavy objects, and again there is the buffering of the GET result right after the getInputStream() call.
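A partial mitigation we are considering (a sketch only, assuming servers answer HEAD the same way they answer GET - misbehaving servers do not, so a GET fallback is still needed):

    import java.net.HttpURLConnection;
    import java.net.URL;

    // Hypothetical variant of resolve(): a HEAD request transfers headers
    // only, so no document body is downloaded or buffered.
    static String resolveWithHead(String url) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("HEAD");  // headers only, no body
        conn.setInstanceFollowRedirects(true);
        conn.getResponseCode();         // forces connect + redirect following
        String resolved = conn.getURL().toString();
        conn.disconnect();
        return resolved;
    }

This helps with the download problem but not with the thread-per-request cost.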
Are there other solutions (or improvements to this one) that could significantly increase speed and decrease memory consumption? Presumably, we need something like:
- a high-performance, lightweight Java HTTP client based on java.nio (sketched below);
- a C HTTP client that uses poll() or select();
- a ready-made library that resolves/expands URLs.
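To illustrate the first option: if a modern JDK is acceptable (our assumption), the built-in java.net.http.HttpClient (Java 11+) follows redirects and runs asynchronously over a small, fixed number of threads instead of a thread per URL. A minimal sketch:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.CompletableFuture;

    public final class AsyncResolver {
        private static final HttpClient CLIENT = HttpClient.newBuilder()
                .followRedirects(HttpClient.Redirect.NORMAL) // auto-follow 3xx
                .build();

        // Resolve asynchronously; the body is read but discarded, so only
        // headers are held in memory per request.
        static CompletableFuture<String> resolve(String url) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                    .GET()
                    .build();
            return CLIENT.sendAsync(request, HttpResponse.BodyHandlers.discarding())
                    .thenApply(resp -> resp.uri().toString()); // final URI after redirects
        }
    }

Thousands of such futures can be in flight at once without a dedicated thread each, which would address both the thread cost and the in-memory buffering of responses.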