I'm trying to save the data description pages from this open data site onto a Linux server that I access via SSH:
https://dandelion.eu/datagems/SpazioDati/milano-grid/resource/
https://dandelion.eu/datagems/SpazioDati/milano-grid/description/
I've read the question "What's the best way to save a complete webpage on a linux server?" and tried
wget -l 1 https://dandelion.eu/datagems/SpazioDati/milano-grid/description/
and
wget -m https://dandelion.eu/datagems/SpazioDati/milano-grid/description/
Neither of them worked: all I get is an index.html. What I want are the files that IE/Firefox's 'Save Page' function produces with 'Web page, complete' on a PC: an html file plus a folder containing all the assets (images, CSS, scripts, and so on).
Is this possible on a Linux server via SSH? Thanks!
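From reading the wget man page, my best guess at reproducing 'Web page, complete' is the flag combination below. The flag meanings are what I understand from the documentation, so treat this as a sketch rather than something I've confirmed works on this site:

# Sketch: -p fetches page requisites (images, CSS, JS), -k rewrites links
# to point at the local copies, -E adds .html/.css extensions where needed,
# and -H lets wget fetch requisites hosted on other domains (e.g. CDNs).
wget -p -k -E -H https://dandelion.eu/datagems/SpazioDati/milano-grid/description/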
Update
This is what I want (the result of Firefox's 'Save Page As' with 'Web page, complete' on https://dandelion.eu/datagems/SpazioDati/milano-grid/description/):
|---Milano Grid description _ dandelion_files
| |---a
| |---css_all.css
| |---css.css
| |---dc.js
| |---fbk.jpg
| |---jquery_002.js
| |---jquery.js
| |---js_all.js
| |---js_sitebase.js
| |---Milano_GRID_4326.png
| |---milano-grid-img2.jpg
| |---mixpanel-2.js
| |---odi.png
| |---sbi.css
| |---spaziodati_black.png
| |---spaziodati_white.png
| |---telecom.png
|---Milano Grid description _ dandelion.htm
This is what I get with wget -p -k https://dandelion.eu/datagems/SpazioDati/milano-grid/description/:
|---dandelion.eu
| |---datagems
| | |---SpazioDati
| | | |---milano-grid
| | | | |---resource
| | | | | |---index.html
| |---jsi18n
| | |---index.html
| |---robots.txt
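Note that wget did download robots.txt, so one guess is that the site's robots.txt (or wget's default user agent) is what's blocking the page requisites. I haven't verified this; the following is just what I plan to try next:

# Guesswork, not a confirmed fix: -e robots=off tells wget to ignore
# robots.txt, and -U sends a browser-like user agent string.
# -nH drops the dandelion.eu/ host directory and --cut-dirs=3 strips the
# datagems/SpazioDati/milano-grid/ nesting, so the output lands in
# description/ directly, a bit closer to Firefox's flat layout.
wget -p -k -E -nH --cut-dirs=3 -e robots=off \
     -U "Mozilla/5.0 (X11; Linux x86_64)" \
     https://dandelion.eu/datagems/SpazioDati/milano-grid/description/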