I have a .html file full of links, I would like to extract the domains without the http:// (so just the hostname portion of the link, e.g blah.com) list them and remove duplicates.
This is what I have come up with so far - i think the issue is the way I am trying to pass the $tree data
#!/usr/local/bin/perl -w
use HTML::TreeBuilder 5 -weak; # Ensure weak references in use
use URI;
foreach my $file_name (@ARGV) {
my $tree = HTML::TreeBuilder->new; # empty tree
$tree->parse_file($file_name);
my $u1 = URI->new($tree);
print "host: ", $u1->host, "\n";
print "Hey, here's a dump of the parse tree of $file_name:\n";
# Now that we're done with it, we must destroy it.
# $tree = $tree->delete; # Not required with weak references
}