I need to scrape a web page using Java, and I've read that regex is a pretty inefficient way of doing it and that one should instead load the page into a DOM Document and navigate it that way.

I've tried reading the documentation but it seems too extensive and I don't know where to begin.

Could you show me how to scrape this table into an array? I can try figuring out my way from there. A snippet/example would do just fine too.

Thanks.

– Mridang Agarwalla

4 Answers

You can try jsoup: Java HTML Parser. It is an excellent library with good sample code.
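
For example, a minimal jsoup sketch could look like the following; the URL and the "table tr td" selector are placeholders for whatever page and table you are actually targeting:

import java.util.ArrayList;
import java.util.List;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public static void main(String[] args) throws Exception {
    // Fetch and parse the page (the URL below is a placeholder).
    Document doc = Jsoup.connect("http://example.com/page-with-table.html").get();

    // Collect the text of every cell in every table row.
    List<String> cells = new ArrayList<String>();
    for (Element td : doc.select("table tr td")) {
        cells.add(td.text());
    }
    System.out.println(cells);
}

select() takes CSS-style selectors, so once you know the table's id or class you can narrow this down to a specific table (e.g., "table#results tr td").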

– dsr
  • I had a look at the jsoup docs and it looks pretty darn good. I was looking for something along the lines of BeautifulSoup for Python, and here it is! – Mridang Agarwalla Jan 02 '11 at 06:09
In cases like this, I usually do it in two steps:

  1. Transform the web page you are trying to scrape into an XHTML document. There are several options for doing this in Java, such as JTidy and HTMLCleaner. These tools will also automatically fix malformed HTML (e.g., close unclosed tags). Both work very well, but I prefer JTidy because it integrates better with Java's DOM API.
  2. Extract the required information using XPath expressions.

Here is a working example using JTidy and the web page you provided; it extracts all file names from the table.

import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathExpression;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.w3c.tidy.Tidy;

public static void main(String[] args) throws Exception {
    // Create a new JTidy instance and set options
    Tidy tidy = new Tidy();
    tidy.setXHTML(true);

    // Parse the HTML page into a DOM document
    URL url = new URL("http://www.cs.grinnell.edu/~walker/fluency-book/labs/sample-table.html");
    Document doc = tidy.parseDOM(url.openStream(), System.out);

    // Use XPath to obtain whatever you want from the (X)HTML
    XPath xpath = XPathFactory.newInstance().newXPath();
    XPathExpression expr = xpath.compile("//td[@valign = 'top']/a/text()");
    NodeList nodes = (NodeList) expr.evaluate(doc, XPathConstants.NODESET);
    List<String> filenames = new ArrayList<String>();
    for (int i = 0; i < nodes.getLength(); i++) {
        filenames.add(nodes.item(i).getNodeValue());
    }

    System.out.println(filenames);
}

The result will be [Integer Processing:, Image Processing:, A Photo Album:, Run-time Experiments:, More Run-time Experiments:] as expected.

Another cool tool you can use is Web Harvest. It basically does everything I did above, but uses an XML file to configure the extraction pipeline.

– João Silva
  • This is an elegant solution, but overkill for some simple scraping. Building a DOM of a large web page will be quite slow (the original example was a small page, but in general most web pages these days have complicated DOMs). – sksamuel Jan 02 '11 at 03:44

Regex is definitely the way to go. Building a DOM is overly complicated and itself requires a lot of text parsing.
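
As a rough illustration (not from the answer itself), a regex-based extraction might look like the sketch below. It assumes the cells contain plain text with no nested tags, which real-world HTML often violates:

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public static void main(String[] args) {
    // A toy table standing in for the fetched page source.
    String html = "<table><tr><td>a.txt</td><td>b.txt</td></tr></table>";

    // Capture the contents of each <td>...</td> cell.
    // DOTALL lets cells span lines; the pattern breaks on nested tags.
    Pattern cell = Pattern.compile("<td[^>]*>(.*?)</td>",
            Pattern.CASE_INSENSITIVE | Pattern.DOTALL);
    Matcher m = cell.matcher(html);

    List<String> cells = new ArrayList<String>();
    while (m.find()) {
        cells.add(m.group(1).trim());
    }
    System.out.println(cells); // prints [a.txt, b.txt]
}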

– sksamuel

If all you are doing is scraping a table into a data file, regex will be just fine, and may even be better than using a DOM document. DOM documents use up a lot of memory (especially for really large data tables), so for large documents you probably want a SAX parser instead.
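
As a rough sketch of the SAX approach (assuming the input has already been tidied into well-formed XHTML, since SAX parsers require valid XML), a streaming table scrape might look like this:

import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

public static void main(String[] args) throws Exception {
    // A toy XHTML fragment standing in for the (tidied) page.
    String xhtml = "<table><tr><td>a.txt</td><td>b.txt</td></tr></table>";

    final List<String> cells = new ArrayList<String>();
    final StringBuilder current = new StringBuilder();

    // SAX streams through the document, so only the current cell is
    // held in memory rather than the whole DOM tree.
    DefaultHandler handler = new DefaultHandler() {
        private boolean inCell = false;

        @Override
        public void startElement(String uri, String localName,
                String qName, Attributes attrs) {
            if ("td".equalsIgnoreCase(qName)) {
                inCell = true;
                current.setLength(0);
            }
        }

        @Override
        public void characters(char[] ch, int start, int length) {
            if (inCell) {
                current.append(ch, start, length);
            }
        }

        @Override
        public void endElement(String uri, String localName, String qName) {
            if ("td".equalsIgnoreCase(qName)) {
                inCell = false;
                cells.add(current.toString().trim());
            }
        }
    };

    SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
    parser.parse(new InputSource(new StringReader(xhtml)), handler);
    System.out.println(cells); // prints [a.txt, b.txt]
}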

– Zeki