I'm doing some SEO work for a friend and, before I begin, want to do a proper inventory of his site. However, much of the site is dynamic: the content is driven by URL parameters, and the HTML isn't populated until after the page has loaded in the browser. This means conventional scrapers, such as HTTrack, miss the bulk of the page content.
There are dozens - possibly even hundreds - of pages, due to the number of permutations of the URL parameters.
Can anyone suggest a tool I can use to visit each of these URLs and capture the HTML of the fully rendered page?
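For context, here is a minimal sketch of the kind of workflow I'm imagining: enumerate every permutation of the URL parameters, then feed each URL to a headless browser that can execute the JavaScript and hand back the rendered HTML. The base URL and parameter values below are placeholders, not the real site's.

```python
from itertools import product
from urllib.parse import urlencode

# Hypothetical base URL and parameter values -- the real site's would go here.
BASE = "https://example.com/page"
PARAMS = {
    "category": ["books", "music"],
    "region": ["us", "uk", "de"],
}

def enumerate_urls(base, params):
    """Yield one URL per permutation of the query-string parameters."""
    keys = sorted(params)
    for values in product(*(params[k] for k in keys)):
        yield base + "?" + urlencode(dict(zip(keys, values)))

urls = list(enumerate_urls(BASE, PARAMS))
print(len(urls))  # 2 categories x 3 regions = 6 permutations

# Each URL would then be loaded in a headless browser, e.g. with Selenium:
#   driver.get(url)
#   html = driver.page_source   # the post-JavaScript, rendered HTML
```

Something driving a real browser engine (Selenium, or similar) seems necessary here, since the page content only exists after the JavaScript runs; a plain HTTP fetch would return the same empty shell that HTTrack sees.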