
Using Python, I built a scraper for an ASP.NET site (specifically a Jenzabar course search portlet) that would create a new session, load the first search page, then simulate a search by posting back the required fields. However, something has changed that I can't identify, and now I get HTTP 500 responses to everything. There are no new fields in the browser's POST data that I can see.
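For reference, the flow looked roughly like this. This is a simplified sketch using requests and BeautifulSoup; the URL, control id, and field names are placeholders, not the real ones:

    import requests
    from bs4 import BeautifulSoup

    SEARCH_URL = "https://example.edu/ics/CourseSearch.aspx"  # placeholder

    session = requests.Session()

    # Step 1: load the first search page so the server sets a session
    # cookie and renders the current hidden ASP.NET fields.
    soup = BeautifulSoup(session.get(SEARCH_URL).text, "html.parser")

    def hidden(name):
        tag = soup.find("input", {"name": name})
        return tag.get("value", "") if tag else ""

    # Step 2: simulate the search by posting back the hidden fields,
    # plus the control that was "clicked" and the search term.
    payload = {
        "__VIEWSTATE": hidden("__VIEWSTATE"),
        "__EVENTVALIDATION": hidden("__EVENTVALIDATION"),
        "__EVENTTARGET": "ctl00$searchButton",  # placeholder control id
        "__EVENTARGUMENT": "",
        "searchTerm": "BIOL",                   # placeholder field/value
    }
    result = session.post(SEARCH_URL, data=payload)
    print(result.status_code)  # this is where I now see 500s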

I would ideally like to figure out how to fix my own scraper, but that is probably difficult to ask about on Stack Overflow without including a ton of site-specific context. So I was wondering: is there a way to treat the page as a black box, fire click events on the postback links I want, and get the HTML of the result?

I saw some answers on here about scraping with JavaScript, but they mostly seem to focus on waiting for JavaScript to load and then returning a normalized representation of the page. I want to simulate the browser actually clicking the links and following the same path to execute the request.

josePhoenix

3 Answers


Without knowing any specifics, my hunch is that you are using a hardcoded session id and the web server's app domain recycled and created new encryption/decryption keys, rendering your hardcoded session id (which was encrypted by the old keys) useless.
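If that is what happened, the fix is to stop hardcoding the id and have the scraper open a fresh session on every run. A minimal sketch with requests (the URL is a placeholder):

    import requests

    session = requests.Session()
    # The first GET makes the server issue a brand-new ASP.NET_SessionId
    # cookie, valid for the app domain as it exists right now.
    session.get("https://example.edu/ics/CourseSearch.aspx")
    print(session.cookies.get("ASP.NET_SessionId"))
    # ...do all subsequent postbacks with this same session object...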

JeremyWeir
  • I used to initiate the session in my browser and copy the cookies, but I have tried logging out and creating a new session, without success. Also, even when I skip my own user account and create an anonymous session for each request, it still doesn't work. – josePhoenix Apr 03 '11 at 21:20
  • 1
    I must have changed two things at once. After pretty much rewriting the scraper (which I needed to do anyway) so that it would do its own sessions in a less-hacky way, it still died partway through. I checked the logs, and it was 3AM exactly... and it turns out that's when it was last time. So I guess it was the session key expiring. Thanks! – josePhoenix Apr 04 '11 at 10:18

You could try using Firebug's Net tab to monitor all requests: browse around manually, then diff the requests your browser generates against the ones your screen scraper is generating.
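On the scraper side, one way to get a diff-able dump of what actually gets sent is to print the prepared request before sending it. A rough sketch with requests (the URL and field names are placeholders):

    import requests

    # Build (but don't send) the same request the scraper would make,
    # then print it in a form you can diff against the Firebug capture.
    req = requests.Request(
        "POST",
        "https://example.edu/ics/CourseSearch.aspx",  # placeholder
        data={"__EVENTTARGET": "ctl00$searchButton", "searchTerm": "BIOL"},
    )
    prepared = req.prepare()

    print(prepared.method, prepared.url)
    for name, value in sorted(prepared.headers.items()):
        print(name + ": " + value)
    print()
    print(prepared.body)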

Porco

If you are just trying to simulate what the browser does, you might want to check out something like Selenium, which runs through a real browser and therefore handles postbacks the way a browser does.
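A rough sketch of that approach (the URL and link text are placeholders, and it assumes a Firefox driver is available):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        driver.get("https://example.edu/ics/CourseSearch.aspx")  # placeholder
        # Clicking the link fires the page's own __doPostBack JavaScript,
        # so the browser builds and submits the form exactly as a user would.
        driver.find_element(By.LINK_TEXT, "Search").click()
        html = driver.page_source  # HTML of the postback result
    finally:
        driver.quit()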

Wyatt Barnett