I'm trying to crawl a local site with wget -r, but without success: it just downloads the first page and doesn't go any deeper. In fact, it fails the same way for every site I try... :)
I've tried various options, but nothing improves. Here's the command I thought would do it:
wget -r -e robots=off --user-agent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.79 Safari/537.4" --follow-tags=a,ref --debug http://rocky:8081/obix
Really, I have no clue. Every site and every piece of wget documentation I read says it should just work with wget -r, so I'm starting to think my wget is buggy (I'm on Fedora 16).
Any idea?
EDIT: Here's the output I'm getting for wget -r --follow-tags=ref,a http://rocky:8081/obix/:

wget -r --follow-tags=ref,a http://rocky:8081/obix/
--2012-10-19 09:29:51--  http://rocky:8081/obix/
Resolving rocky... 127.0.0.1
Connecting to rocky|127.0.0.1|:8081... connected.
HTTP request sent, awaiting response... 200 OK
Length: 792 [text/xml]
Saving to: “rocky:8081/obix/index.html”

100%[==============================================================================>] 792         --.-K/s   in 0s

2012-10-19 09:29:51 (86,0 MB/s) - “rocky:8081/obix/index.html” saved [792/792]

FINISHED --2012-10-19 09:29:51--
Downloaded: 1 files, 792 in 0s (86,0 MB/s)
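One detail in that transcript may matter: the single file wget did save came back as text/xml. As far as I know, wget -r only scans responses it recognizes as text/html (or CSS) for further links, so a non-HTML Content-Type would stop recursion after the first document. A self-contained sketch that reproduces this behavior (the port 8931, the file names, and the python3 test server are all made up for the demo, not part of my setup):

```shell
# Serve an HTML-looking file under a .xml name so it is delivered with a
# text/xml Content-Type, then crawl it with wget -r.
workdir=$(mktemp -d)
cd "$workdir"
mkdir site
# page.xml links to other.html, but will be served as text/xml
printf '<html><body><a href="other.html">next</a></body></html>\n' > site/page.xml
printf '<html><body>second page</body></html>\n' > site/other.html
# Start a throwaway local server (assumes python3 is installed)
( cd site && exec python3 -m http.server 8931 --bind 127.0.0.1 ) >/dev/null 2>&1 &
server=$!
sleep 1
# Recursive crawl; -nH drops the host directory so files land in out/
wget -q -r -nH -P out http://127.0.0.1:8931/page.xml
kill "$server"
# Only page.xml is saved: the <a href="other.html"> link inside it was
# never followed, because the response was not text/html.
ls out
```

If that's really what's happening here, the recursion stops not because of the options but because the server at rocky:8081 labels the pages as XML.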