There is a website containing information we have paid for access to; however, the only way to access that information is through the website itself, and there are 1,400 records. Since there is so much of it, we want the information in an Excel spreadsheet, which would be far more manageable. Unfortunately, the organization in charge of the website isn't willing to help.
I can write a Python script to parse the HTML and extract the relevant data. The problem is that the site is not easily crawlable: it is an ASP site, and many of the "links" are in fact JavaScript triggers which load the destination page. This means that a tool like HTTrack doesn't really work.
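For context, the "links" look something like `<a href="javascript:__doPostBack('grid','Page$2')">2</a>` (a made-up example of the ASP postback pattern; the actual control names on the site differ). The parsing side itself isn't the issue. A minimal sketch of what I have in mind, assuming BeautifulSoup and with placeholder tag/class names standing in for the real markup, would be:

```python
# Minimal parsing sketch, assuming BeautifulSoup is installed.
# "record-row" is a placeholder; the real page uses different markup.
from bs4 import BeautifulSoup

def extract_records(html):
    """Pull the cell text out of each record row on one page of HTML."""
    soup = BeautifulSoup(html, "html.parser")
    records = []
    for row in soup.find_all("tr", class_="record-row"):
        cells = [td.get_text(strip=True) for td in row.find_all("td")]
        records.append(cells)
    return records
```

What I'm missing is a way to actually follow those JavaScript "links" so I can collect the HTML for all 1,400 records in the first place.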
Are there any other tools or Python modules that can help me do this (bearing in mind the "javascript" links)? I'm totally new to this kind of thing, so I have no sense of what's available to me.