
I am trying to parse the source of a downloaded web page in order to obtain the link listing. A one-liner would work fine. Here's what I've tried thus far:

This seems to leave out parts of the URL from some of the page names.

$ cat file.html | grep -o -E '\b(([\w-]+://?|domain[.]org)[^\s()<>]+(?:\([\w\d]+\)|([^[:punct:]\s]|/)))'|sort -ut/ -k3

This gets all of the URLs, but I do not want to include links that contain anchors (`#`). I also want to be able to restrict matches to `domain.org/folder/`:

$ awk 'BEGIN{
RS="</a>"
IGNORECASE=1
}
{
  for(o=1;o<=NF;o++){
    if ( $o ~ /href/){
      gsub(/.*href=\042/,"",$o)
      gsub(/\042.*/,"",$o)
      print $(o)
    }
  }
}' file.html
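For reference, the awk approach above can be extended to do both of the things asked for: skip any link containing `#` and keep only links under a chosen prefix. A minimal sketch, assuming GNU awk (for `IGNORECASE`) and using `http://domain.org/folder/` plus a made-up sample file as stand-ins:

```shell
# Stand-in sample page (assumed structure)
cat > /tmp/file.html <<'EOF'
<a href="http://domain.org/folder/a.html">A</a>
<a href="http://domain.org/folder/a.html#top">A anchor</a>
<a href="http://other.org/b.html">B</a>
EOF

# Same href extraction as above, plus two filters:
# drop links containing '#', keep only the chosen folder
awk 'BEGIN{RS="</a>"; IGNORECASE=1}
{
  for(o=1;o<=NF;o++){
    if($o ~ /href/){
      gsub(/.*href=\042/,"",$o)
      gsub(/\042.*/,"",$o)
      if($o !~ /#/ && $o ~ /^http:\/\/domain\.org\/folder\//)
        print $o
    }
  }
}' /tmp/file.html
# prints only http://domain.org/folder/a.html
```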
Astron
    http://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags – William Pursell Mar 20 '11 at 15:14
  • `grep -E` doesn't understand non-capturing sub-patterns or `\w` escapes inside character classes. You need to use `grep -P`. – Dennis Williamson Mar 20 '11 at 16:17
  • @Dennis Williamson: now that returns similar results to the second example but I need to be able to weed out the anchor links and specify a http://domain.com/folder – Astron Mar 20 '11 at 16:21

2 Answers


If you are only parsing something like `<a>` tags, you could just match the href attribute like this:

$ cat file.html | grep -o -E 'href="([^"#]+)"' | cut -d'"' -f2 | sort | uniq

That will ignore the anchor and also guarantee that you have uniques. This does assume that the page has well-formed (X)HTML, but you could pass it through Tidy first.
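Putting that together with the domain restriction from the comments, a minimal end-to-end sketch, using a made-up sample file and `http://domain.com/folder/` as a placeholder prefix:

```shell
# Stand-in sample page (assumed structure)
cat > /tmp/page.html <<'EOF'
<a href="http://domain.com/folder/one.html">one</a>
<a href="http://domain.com/folder/one.html#section">one again</a>
<a href="http://domain.com/other/two.html">two</a>
EOF

# href="..." values (anchored links never match, since '#' is
# excluded from the character class), de-duplicated,
# then restricted to the folder of interest
grep -oE 'href="[^"#]+"' /tmp/page.html \
  | cut -d'"' -f2 \
  | sort -u \
  | grep '^http://domain\.com/folder/'
# prints only http://domain.com/folder/one.html
```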

mjbommar
  • that works similar to the second example that I posted but I'm looking for a way to trim the results that are anchor links. http://domain.com/folder/link.html http://domain.com/folder/link.html#anchor **not desirable** – Astron Mar 20 '11 at 15:45
  • great, but now it seems to be including other links (maybe I just didn't notice them before. Can the grep statement also specify the domain.com/folder/? Thanks – Astron Mar 20 '11 at 16:31
  • @Astron, sure, add ` | grep 'domain.com/folder/'` at the end of the line. – mjbommar Mar 20 '11 at 16:35
$ lynx -dump http://www.ibm.com

And look for the string 'References' in the output. Post-process with sed if you need to.

Using a different tool sometimes makes the job simpler. Once in a while, a different tool makes the job dead simple. This is one of those times.
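The dump ends with a numbered "References" list, one URL per entry. A hedged sketch of the sed/grep post-processing, run here against a canned file standing in for real `lynx -dump` output (the format is an assumption based on lynx's usual layout):

```shell
# Stand-in for the tail of `lynx -dump` output (assumed format)
cat > /tmp/dump.txt <<'EOF'
Some page text here.

References

   1. http://www.ibm.com/products
   2. http://www.ibm.com/support#contact
EOF

# Keep everything from the References marker onward,
# pull out the bare URLs, and drop any #anchor links
sed -n '/^References/,$p' /tmp/dump.txt \
  | grep -oE 'https?://[^ ]+' \
  | grep -v '#'
# prints only http://www.ibm.com/products
```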

Mike Sherrill 'Cat Recall'