My idea would be to download the page with curl so you get its HTML, then take a look at this topic. With the HTML in hand, you can extract all the tags you need, for example the "img" tags and their src attributes.
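For example, here is a minimal sketch of that extraction step using Python's standard library instead of shelling out to curl; the URL is just a placeholder:

```python
# Minimal sketch: fetch the page and collect every <img> src attribute
# using only the standard library. "https://example.com/" is a placeholder.
from html.parser import HTMLParser
from urllib.request import urlopen
from urllib.parse import urljoin

class ImgCollector(HTMLParser):
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                # Resolve relative paths against the page URL.
                self.images.append(urljoin(self.base_url, src))

page_url = "https://example.com/"
html = urlopen(page_url).read().decode("utf-8", errors="replace")
collector = ImgCollector(page_url)
collector.feed(html)
print(collector.images)
```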
Then just load them into an array, iterate over it with curl to download each one, and store them locally.
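A sketch of that download loop, again with the standard library; the list of URLs here is only a stand-in for whatever you extracted in the previous step:

```python
# Minimal sketch: download every collected URL and store it locally.
# "image_urls" stands in for the list built in the previous snippet.
import os
from urllib.parse import urlparse
from urllib.request import urlretrieve

image_urls = [
    "https://example.com/logo.png",      # placeholder entries
    "https://example.com/img/photo.jpg",
]

os.makedirs("downloads", exist_ok=True)
for url in image_urls:
    # Name the local file after the last path segment of the URL.
    filename = os.path.basename(urlparse(url).path) or "index"
    urlretrieve(url, os.path.join("downloads", filename))
```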
Another approach would be to download the HTML and pull out all the links with a filter (e.g. everything beginning with a double quote followed by http:// and ending at the closing double quote; make a second filter for single quotes if the HTML uses those as well).
Then just iterate over all the links and whitelist them by extension, keeping only the file types you are interested in. Then download them with curl and store them.
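Something along these lines; the regex, the extension whitelist and the URL are all just assumptions for the sketch:

```python
# Minimal sketch of the second approach: pull every quoted http(s) URL out of
# the raw HTML, keep only whitelisted extensions, then download them.
import os
import re
from urllib.parse import urlparse
from urllib.request import urlopen, urlretrieve

page_url = "https://example.com/"   # placeholder
html = urlopen(page_url).read().decode("utf-8", errors="replace")

# Matches http:// or https:// between quotes, stopping at the next quote
# character (this covers both the double- and single-quote cases).
links = re.findall(r"""["'](https?://[^"']+)["']""", html)

# Extension whitelist is an assumption; adjust to the files you care about.
allowed = {".png", ".jpg", ".jpeg", ".gif", ".css", ".js"}
wanted = [u for u in links
          if os.path.splitext(urlparse(u).path)[1].lower() in allowed]

os.makedirs("downloads", exist_ok=True)
for url in wanted:
    filename = os.path.basename(urlparse(url).path)
    urlretrieve(url, os.path.join("downloads", filename))
```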
EDIT:
I forgot: also remember to fix the links inside the downloaded .html, .css and .js (and probably other) files so they point at your local copies. And as an off-topic side note, watch out for images with PHP code embedded in them.
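For the link fixing, a naive sketch is a plain string replacement of the site prefix in the saved text files; the prefix and directory name are placeholders, and a real rewrite would also have to handle relative links:

```python
# Minimal sketch: rewrite absolute URLs in saved .html/.css/.js files so they
# point at the local copies. Prefixes and folder layout are assumptions.
import pathlib

site_prefix = "https://example.com/"   # placeholder for the original site
local_prefix = "./"                    # relative path of the local copies

for path in pathlib.Path("downloads").rglob("*"):
    if path.is_file() and path.suffix.lower() in {".html", ".css", ".js"}:
        text = path.read_text(encoding="utf-8", errors="replace")
        path.write_text(text.replace(site_prefix, local_prefix),
                        encoding="utf-8")
```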