I'm writing a client that loads and parses many pages at once and sends data from them to a server. If I run just one page-processor at a time, things go reasonably well:
********** Round-trip (with 0 sends/0 loads) for (+0/.0/-0) was total 1.98s (1.60s load html, 0.24s parse, 0.00s on queue, 0.14s to process) **********
********** Round-trip (with 0 sends/0 loads) for (+0/.0/-0) was total 1.87s (1.59s load html, 0.25s parse, 0.00s on queue, 0.03s to process) **********
********** Round-trip (with 0 sends/0 loads) for (+0/.0/-0) was total 2.79s (1.78s load html, 0.28s parse, 0.00s on queue, 0.72s to process) **********
********** Round-trip (with 0 sends/1 loads) for (+0/.0/-0) was total 2.18s (1.70s load html, 0.34s parse, 0.00s on queue, 0.15s to process) **********
********** Round-trip (with 0 sends/1 loads) for (+0/.0/-0) was total 1.91s (1.47s load html, 0.21s parse, 0.00s on queue, 0.23s to process) **********
********** Round-trip (with 0 sends/1 loads) for (+0/.0/-0) was total 1.84s (1.59s load html, 0.22s parse, 0.00s on queue, 0.03s to process) **********
********** Round-trip (with 0 sends/0 loads) for (+0/.0/-0) was total 1.90s (1.67s load html, 0.21s parse, 0.00s on queue, 0.02s to process) **********
However, with ~20 running at once (each in its own thread), the HTTP traffic gets remarkably slow:
********** Round-trip (with 2 sends/7 loads) for (+0/.0/-0) was total 23.37s (16.39s load html, 0.30s parse, 0.00s on queue, 6.67s to process) **********
********** Round-trip (with 2 sends/5 loads) for (+0/.0/-0) was total 20.99s (14.00s load html, 1.99s parse, 0.00s on queue, 5.00s to process) **********
********** Round-trip (with 4 sends/4 loads) for (+0/.0/-0) was total 17.89s (9.17s load html, 0.30s parse, 0.12s on queue, 8.31s to process) **********
********** Round-trip (with 3 sends/5 loads) for (+0/.0/-0) was total 26.22s (15.34s load html, 1.63s parse, 0.01s on queue, 9.24s to process) **********
The `load html` bit is the time it takes to read the HTML of the webpage I'm processing (from `resp = self.mech.open(url)` to `resp.read(); resp.close()`). The `to process` bit is the time it takes to make a round-trip from this client to the server that processes it (`fp = urllib2.urlopen(...); fp.read(); fp.close()`). The `X sends/Y loads` bit is the number of simultaneous sends to the server and loads from webpages that were running when the request to the server was made.
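For context, the timing is gathered roughly like this (a simplified sketch of the worker method; `self.parse` and `self.server_url` are stand-ins for the real parser and endpoint):

    import time
    import urllib2

    # Sketch of the instrumented round-trip; self.mech is a mechanize.Browser,
    # self.parse and self.server_url stand in for the real parser and endpoint.
    def round_trip(self, url):
        t0 = time.time()
        resp = self.mech.open(url)          # 'load html'
        html = resp.read()
        resp.close()
        t_load = time.time() - t0

        t0 = time.time()
        payload = self.parse(html)          # 'parse'
        t_parse = time.time() - t0

        t0 = time.time()
        fp = urllib2.urlopen(self.server_url, payload)  # 'to process'
        fp.read()
        fp.close()
        t_process = time.time() - t0

        return t_load, t_parse, t_process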
I'm most concerned about the `to process` bit. The actual processing on the server only takes 0.2s or so. Only 400 bytes are being sent, so it's not a matter of using up too much bandwidth. The interesting thing is that if I run a separate program (while the full version is doing all this simultaneous sending/loading) that opens 5 threads and repeatedly does just the `to process` bit, it goes remarkably fast:
1 took 0.04s
1 took 1.41s in total
0 took 0.03s
0 took 1.43s in total
4 took 0.33s
2 took 0.49s
2 took 0.08s
2 took 0.01s
2 took 1.74s in total
3 took 0.62s
4 took 0.40s
3 took 0.31s
4 took 0.33s
3 took 0.05s
3 took 2.18s in total
4 took 0.07s
4 took 2.22s in total
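For reference, that stand-alone tester is essentially the following (a sketch; the endpoint and payload are stand-ins for the real ones):

    import threading
    import time
    import urllib2

    SERVER_URL = 'http://localhost:8000/process'   # stand-in endpoint
    PAYLOAD = 'x' * 400                            # ~400 bytes, like the real requests

    def sender(thread_id, n_requests=5):
        t_start = time.time()
        for _ in range(n_requests):
            t0 = time.time()
            fp = urllib2.urlopen(SERVER_URL, PAYLOAD)
            fp.read()
            fp.close()
            print '%d took %.2fs' % (thread_id, time.time() - t0)
        print '%d took %.2fs in total' % (thread_id, time.time() - t_start)

    threads = [threading.Thread(target=sender, args=(i,)) for i in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()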
Each `to process` in this stand-alone program takes only 0.01s to 0.50s, far less than the 6-10 seconds in the full-blown version, and it isn't using any fewer sending threads (it uses 5, and the full-blown version is capped at 5). That is, while the full-blown version is running, a separate program sending those very same `(+0/.0/-0)` requests of 400 bytes each takes only ~0.31s per request. So it's not that the machine I'm running on is tapped out; rather, the multiple simultaneous loads in other threads seem to be slowing down the should-be-fast sends (which really are fast in the other program running on the same machine).
The sending is done with `urllib2.urlopen`, while the reading is done with mechanize (which eventually uses a fork of `urllib2.urlopen`).
Is there a way to make the full-blown program run as quickly as this mini stand-alone version, at least when they are sending the same thing? I'm thinking of writing another program that just takes in what to send over a named pipe or something, so that the sends are done in another process, but that seems silly somehow. Any suggestions would be welcome.
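To make the idea concrete, here is roughly the shape it would take (a rough sketch using the stdlib `multiprocessing` module rather than a raw named pipe; the endpoint and payload are made up):

    from multiprocessing import Process, Queue
    import urllib2

    SERVER_URL = 'http://localhost:8000/process'   # made-up endpoint

    def send_worker(q):
        # Runs in its own process, so these sends don't compete with the
        # CPU-heavy parsing going on in the main process's threads.
        while True:
            payload = q.get()
            if payload is None:                    # sentinel: shut down
                break
            fp = urllib2.urlopen(SERVER_URL, payload)
            fp.read()
            fp.close()

    if __name__ == '__main__':
        send_queue = Queue()
        worker = Process(target=send_worker, args=(send_queue,))
        worker.start()
        # the page-processing threads now just enqueue instead of sending:
        send_queue.put('x' * 400)
        send_queue.put(None)                       # shut the worker down when done
        worker.join()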
Any suggestions on how to make those multiple simultaneous page loads faster (so the times look more like 1-3 seconds instead of 10-20 seconds) would also be welcome.
EDIT: Additional note: I rely on the cookie-handling functionality of mechanize, so any answer would ideally provide a way to deal with that, as well...
EDIT: I have the same setup with a different config, where just one page is opened and ~10-20 things are added to the queue at once. Those get processed like a knife through butter; e.g., here is the tail end after adding a whole bunch:
********** Round-trip (with 4 sends/0 loads) for (+0/.0/-0) was total 1.17s (1.14s wait, 0.04s to process) **********
********** Round-trip (with 4 sends/0 loads) for (+0/.0/-0) was total 1.19s (1.16s wait, 0.03s to process) **********
********** Round-trip (with 4 sends/0 loads) for (+0/.0/-0) was total 1.26s (0.80s wait, 0.46s to process) **********
********** Round-trip (with 4 sends/0 loads) for (+0/.0/-0) was total 1.35s (0.77s wait, 0.58s to process) **********
********** Round-trip (with 4 sends/0 loads) for (+2/.4/-0) was total 1.44s (0.24s wait, 1.20s to process) **********
(I added the `wait` timing, which is how long the info sat on the queue before it was sent.) Note that the `to process` is as fast as it was in the stand-alone program. The problem only manifests in the setup that is constantly reading and parsing webpages. (Note that the parsing itself takes a lot of CPU.)
EDIT: Some preliminary testing indicates I should just use a separate process for each webpage load... will post an update once that is up and running.
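The direction I'm testing looks roughly like this (a sketch; `urls` and `handle` are placeholders, and carrying mechanize's cookie state across the process boundary is the part I still have to solve):

    from multiprocessing import Pool
    import mechanize

    def load_page(url):
        # Each load runs in a worker process, so it doesn't share an
        # interpreter (and hence a GIL) with the parsing/sending threads.
        br = mechanize.Browser()    # note: a fresh Browser means fresh cookies
        resp = br.open(url)
        html = resp.read()
        resp.close()
        return url, html

    if __name__ == '__main__':
        pool = Pool(processes=20)
        # urls: iterable of pages to fetch (placeholder)
        for url, html in pool.imap_unordered(load_page, urls):
            handle(url, html)       # placeholder: parse + enqueue for sending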