
In a crawler-like project, we have a common and heavily exercised task: resolving/expanding thousands of URLs. Say we have (a very simplified example):

http://bit.ly/4Agih5

A GET request to 'http://bit.ly/4Agih5' returns a 3xx response, and we follow the redirect straight to:

http://stackoverflow.com

GET 'http://stackoverflow.com' returns 200. So 'stackoverflow.com' is the result we need.

Any URL (not only well-known shorteners like bit.ly) is allowed as input. Some of them redirect once, some don't redirect at all (the result is the URL itself in that case), and some redirect multiple times. Our task is to follow all redirects, imitating browser behavior as closely as possible. In general, if we have some URL A, the resolver should return a URL B that is the same as where A would end up if it were opened in a browser.

So far we have used Java, a pool of threads, and a plain URLConnection to solve this task (a rough sketch follows the list below). The advantages are obvious:

  • simplicity - just create a URLConnection, enable follow-redirects, and that's (almost) it;
  • good HTTP support - Java provides everything we need to imitate a browser as closely as possible: automatic redirect following and cookie support.
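
A rough sketch of what this looks like today (simplified and illustrative - the names and pool size are not our exact code):

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.*;

// Simplified sketch of the current approach: one blocking resolve
// per task on a fixed thread pool.
ExecutorService pool = Executors.newFixedThreadPool(100);

Future<String> result = pool.submit(new Callable<String>() {
    public String call() throws Exception {
        HttpURLConnection conn =
            (HttpURLConnection) new URL("http://bit.ly/4Agih5").openConnection();
        conn.setInstanceFollowRedirects(true); // follow 3xx automatically
        conn.getInputStream().close();         // sends the request (and buffers the body)
        return conn.getURL().toString();       // final URL after all redirects
    }
});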

Unfortunately, this approach also has drawbacks:

  • performance - threads are not free, and URLConnection starts downloading the document as soon as getInputStream() is called, even if we don't need the body;
  • memory footprint - we are not sure exactly, but URL and URLConnection seem to be quite heavyweight objects, and again the GET result is buffered right after the getInputStream() call (one partial mitigation is sketched right below this list).
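
One partial mitigation we could apply to the current approach: issue HEAD instead of GET, so no body is ever transferred (a minimal sketch, assuming the target servers implement HEAD properly, which not all of them do):

import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: resolve without downloading the body at all.
HttpURLConnection conn =
    (HttpURLConnection) new URL("http://bit.ly/4Agih5").openConnection();
conn.setRequestMethod("HEAD");  // headers only, no body
conn.getResponseCode();         // sends the request, follows redirects
String finalUrl = conn.getURL().toString();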

Are there other solutions (or improvements to this one) that could significantly increase speed and decrease memory consumption? Presumably, we need something like:

  • a high-performance, lightweight Java HTTP client based on java.nio (a sketch of what we have in mind follows this list);
  • a C HTTP client which uses poll() or select();
  • a ready-made library which resolves/expands URLs.
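
For illustration of the shape we want in the first bullet: java.net.http.HttpClient, which only ships in much newer JDKs, so this is shown purely to make the requirement concrete, not something we can use as-is - asynchronous, redirect-aware, and able to discard the body:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Illustration only (JDK 11+): async resolve with no body buffering.
HttpClient client = HttpClient.newBuilder()
        .followRedirects(HttpClient.Redirect.NORMAL)
        .build();

HttpRequest request = HttpRequest.newBuilder(URI.create("http://bit.ly/4Agih5"))
        .method("HEAD", HttpRequest.BodyPublishers.noBody())
        .build();

client.sendAsync(request, HttpResponse.BodyHandlers.discarding())
        .thenApply(response -> response.uri().toString()) // final URL after redirects
        .thenAccept(System.out::println)
        .join();
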
  • Have you tried Apache Nutch crawler? – Senthil Apr 12 '11 at 23:36
  • Some sites redirect using meta tags or Javascript, so most likely you want to use a browser to get a definitive answer. – Abdullah Jibaly Apr 12 '11 at 23:51
  • @Abdullah Jibaly yes, I know. The most important of those we process in a site-specific way to get the final destination. As I said, the behavior should be as close as possible to a browser's, not exactly the same. Considering that we need to process thousands of URLs, I don't believe handling JS is the way for us. – Shcheklein Apr 15 '11 at 17:01

2 Answers


I'd use a Selenium script to read URLs off a queue and GET them, then wait about 5 seconds per browser to see whether a redirect occurs; if so, put the new redirect URL back into the queue for the next instance to process (a rough sketch follows). You can have as many instances running simultaneously as you want.
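
A minimal sketch of that idea in Java (assuming Selenium WebDriver with a local Firefox; the queue wiring is left out):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// Load the URL in a real browser so JS and meta-refresh redirects
// are honored, then read off the final address.
WebDriver driver = new FirefoxDriver();
try {
    driver.get("http://bit.ly/4Agih5"); // blocks until the page loads
    Thread.sleep(5000);                 // crude wait for late (JS) redirects
    System.out.println(driver.getCurrentUrl()); // where the browser ended up
} finally {
    driver.quit();
}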

UPDATE:

If you only care about the Location header (which is what most non-JS, non-meta redirects use), simply check it; you never need to get the InputStream:

import java.net.HttpURLConnection;
import java.net.URL;

HttpURLConnection.setFollowRedirects(false); // inspect redirects ourselves
URL url = new URL("http://bit.ly/abc123");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
String newLocation = conn.getHeaderField("Location"); // null if there is no redirect

If newLocation is populated, stick that URL back into the queue and have it followed in the next round.
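
Putting it together, a minimal sketch of the whole chain (a hypothetical resolve helper; the 10-hop cap is an arbitrary guard against redirect cycles):

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

static String resolve(String address) throws IOException {
    HttpURLConnection.setFollowRedirects(false);
    URL url = new URL(address);
    for (int hops = 0; hops < 10; hops++) {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        String next = conn.getHeaderField("Location");
        if (next == null) return url.toString(); // no redirect: done
        url = new URL(url, next); // also handles relative Location values
    }
    return url.toString(); // too many hops; give up with the last URL seen
}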

Abdullah Jibaly
  • How can it be faster and consume less memory than our current solution? Am I right that I'd need to start a lot of "browsers" to parallelize that? – Shcheklein Apr 15 '11 at 17:08

You can use Python, gevent, and urlopen. Combine this gevent example with the redirect handling in this SO question.

I would not recommend Nutch, it is very complex to set up and has numerous dependencies (Hadoop, HDFS).

Spike Gronim