
We have a petition, and each petition signature goes on a Google Map as a flag.

The problem is that we now have 3000+ signatures (and rising by hundreds each day), so the HTML is becoming huge.

So I'm going to change it to an AJAX script. Would it be better to ajax in all 3000+ signatures at once in one AJAX query or would it be better to run 3000 separate queries?

I have a feeling that the answer could be the second one, but what impact would running 3000+ (almost) concurrent AJAX queries have on the server?

Thanks guys.

Thomas Clayson
    Definitely NOT 3000+ HTTP requests! :) – Lee Gunn Nov 03 '11 at 11:35
  • 1
    Why not 3 AJAX queries downloading a third of the data each? – Widor Nov 03 '11 at 11:36
  • 3
    I think download 500k at once will be a lot faster, once you will avoid the overhead of communication of each XHR call. – Yuri Ghensev Nov 03 '11 at 11:36
  • 1
    its ok to split very large file in chunks. But not in 3000. – c69 Nov 03 '11 at 11:37
  • 2
    And don't make 3000 flags; [use marker clustering](http://i.imgur.com/bGL0o.jpg). – hyperslug Nov 03 '11 at 11:39
  • Test various combinations and time it. You find out the sweet spot when the connection set up and tear down outweigh the transfer time. My opinion: just load it all at once. It's an easier load on the server, and makes your code less complex. When the download begins to take too long (user experience suffers), you can explore loading the data in chunks. Definitely do not open more than 3 or 5 connections to your server at once, unless you've got the server configured to handle that. – Jonathan Julian Nov 03 '11 at 11:41
  • hyperslug thats a good idea - do you have any examples or tutorials going straight from mysql? – Thomas Clayson Nov 03 '11 at 11:48
  • Good question. The second is better but more complex to handle; if you think you can handle it, go for the second, otherwise the first will be easier. – Hemant Menaria Nov 03 '11 at 11:41
  • @ThomasClayson, the clustering is handled by the Maps API so you'll still need all 3000 data points, but it's a good automatic UI implementation. Sample code: http://google-maps-utility-library-v3.googlecode.com/svn/trunk/markerclusterer/docs/examples.html, and overview: http://code.google.com/apis/maps/articles/toomanymarkers.html – hyperslug Nov 03 '11 at 12:05
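The clustering suggestion from the comments might look something like this; `toMarkerOptions` is a hypothetical helper, and `MarkerClusterer` comes from the utility library linked in the comment above:

```javascript
// Turn raw signature rows into Google Maps marker options.
function toMarkerOptions(rows) {
  return rows.map(function (r) {
    return { position: { lat: r.lat, lng: r.lng }, title: r.name };
  });
}

// In the browser: build the markers and let MarkerClusterer collapse
// nearby flags into single numbered cluster icons, so the map stays
// readable even with 3000+ points. The clusterer manages which markers
// are shown, so the markers are not attached to the map directly.
function plotClustered(map, rows) {
  var markers = toMarkerOptions(rows).map(function (opts) {
    return new google.maps.Marker(opts);
  });
  return new MarkerClusterer(map, markers);
}
```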

5 Answers

2

Each separate query has an overhead because of the HTTP request, HTTP headers and the like. Not to mention the lag. So in efficiency terms, downloading them all at once is by far the best. But efficiency doesn't always create great user experience. You don't want the user to have to wait an age just to see the signatures pop up, so try to find a balance.

I would start by downloading them in groups of 500, and try it out on a few different internet connections, then tweak the number as appropriate. You could even play around with the order they are downloaded in, depending on how much development effort you're willing to put in on the server side. For example, starting closest to the current user might be the most useful.
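A minimal sketch of that batching approach, assuming a hypothetical `signatures.php` endpoint that accepts `offset` and `limit` parameters and returns a JSON array (empty once all rows have been sent):

```javascript
// Pure helper: the offsets needed to fetch `total` rows in chunks of `size`.
function batchOffsets(total, size) {
  var offsets = [];
  for (var o = 0; o < total; o += size) offsets.push(o);
  return offsets;
}

var BATCH_SIZE = 500;

// Fetch one batch, plot it, then request the next — the map fills in
// progressively instead of waiting for all 3000+ rows at once.
function loadBatch(offset, onBatch) {
  $.getJSON('signatures.php', { offset: offset, limit: BATCH_SIZE }, function (rows) {
    if (rows.length === 0) return;            // server has no more signatures
    onBatch(rows);                            // plot this batch immediately
    loadBatch(offset + BATCH_SIZE, onBatch);  // then fetch the next batch
  });
}
```

On the page you would kick it off with `loadBatch(0, addFlagsToMap);`, where `addFlagsToMap` is whatever currently plots the flags.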

Nathan MacInnes
  • go one step further: measure the time it takes to get the #1 query. If it's very short, then double the size and on for each subsequent query: if it's too long, then half the size. – Olivier Pons Nov 03 '11 at 15:28
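The doubling/halving idea from the comment can be sketched as a pure sizing function (the target time is an arbitrary tuning parameter, not something from the original post):

```javascript
// Adjust the next batch size from how long the last request took:
// grow while responses come back fast, shrink when they drag.
function nextBatchSize(current, elapsedMs, targetMs) {
  if (elapsedMs < targetMs / 2) return current * 2;                      // fast: fetch more per request
  if (elapsedMs > targetMs) return Math.max(1, Math.floor(current / 2)); // slow: back off
  return current;                                                        // close enough: keep it
}
```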
1

Why does it have to be one or the other and not somewhere in the middle - they're both at the more extreme end of AJAX usage!

Don't make 3,000 separate calls to get one dataset; that's madness.
And 1 call getting 500kb+ is still a little on the crazy side.

Why not just a couple of calls to get, say, a few hundred records at a time?
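One way to sketch that middle ground: split the set into a handful of slices and fetch them in parallel, assuming a hypothetical `signatures.php` endpoint that accepts `offset`/`limit` parameters:

```javascript
// Pure helper: divide `total` rows into `parts` roughly equal slices.
function sliceRanges(total, parts) {
  var size = Math.ceil(total / parts);
  var ranges = [];
  for (var o = 0; o < total; o += size) {
    ranges.push({ offset: o, limit: Math.min(size, total - o) });
  }
  return ranges;
}

// Fire off one request per slice; each plots its rows as they arrive.
function loadInSlices(total, parts, onBatch) {
  sliceRanges(total, parts).forEach(function (r) {
    $.getJSON('signatures.php', r, onBatch);
  });
}
```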

Widor
1

I think you can still use one HTTP request but send the data down in chunks by flushing the response buffer.

Check out this post for an idea of how to do it:

How to load an ajax (jquery) request response progressively without waiting for it to finish?
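A client-side sketch of consuming such a flushed response, assuming the server writes one JSON signature per line and flushes as it goes (the endpoint shape is made up; the key point is that `readyState` 3 fires repeatedly while data is still arriving):

```javascript
// Pure helper: split buffered text into complete lines plus any
// half-received remainder to keep for the next chunk.
function splitComplete(buffer) {
  var lines = buffer.split('\n');
  var rest = lines.pop();
  return { lines: lines.filter(Boolean), rest: rest };
}

// Read the response as it streams in, plotting rows before the
// request has even finished.
function streamSignatures(url, onRow) {
  var xhr = new XMLHttpRequest();
  var seen = 0, pending = '';
  xhr.open('GET', url);
  xhr.onreadystatechange = function () {
    if (xhr.readyState !== 3 && xhr.readyState !== 4) return;
    pending += xhr.responseText.slice(seen);   // only the new bytes
    seen = xhr.responseText.length;
    var parts = splitComplete(pending);
    parts.lines.forEach(function (line) { onRow(JSON.parse(line)); });
    if (xhr.readyState === 4 && parts.rest) onRow(JSON.parse(parts.rest));
    pending = xhr.readyState === 4 ? '' : parts.rest;
  };
  xhr.send();
}
```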

Lee Gunn
  • This looks like a good idea. You could make the chunks smaller than using separate requests because there'd be much less overhead. – Nathan MacInnes Nov 04 '11 at 10:17
0

You're currently choosing between two extremes. What about 300, or some other amount x? You could measure what the optimal batch size is; it also depends on the client's connection and the server's capabilities.

Imho 3000 queries doesn't really sound like a good solution, but I don't have anything to back that up.

RvdK
0

Every request has some overhead, so making more requests means more data transmitted overall. Each request also adds latency while the client waits for the start of the server response to arrive. So making more requests will take longer; in particular, 3000 requests will take very much longer than a single one.

Nevertheless, pulling the data from the server in several requests, like in batches of 500, has an advantage: you can already display the first 500 signatures while the rest are still loading. So requesting the data in reasonable batches might be the way to go, depending on what you do with the received data.

sth