This kind of issue crops up everywhere in database/SQL design, and although it's not precisely the same here, the thing that'll get you is exactly that: network communications.
You need to watch out for it, because it'll come back to bite you if you don't get it right - i.e. users will get severely annoyed at HAVING to wait. I can't emphasize this enough.
Here's the scenario:
You want to transfer many small snippets of information from your server upon request.
Each request depends on a number of factors operating efficiently. All of which are OUT OF YOUR CONTROL:
Wide area network response time (anywhere in the world, right?)
Local area network response time (anywhere in the building)
WebServer Load
WebServer Response time
Database response time
Backend Script run time
Javascript run time to process the result
The fact that browsers are generally limited to around 6 parallel AJAX requests per host at once (I think - someone correct me on the exact number)
Multiply all of that per request (erm... in your case, x 100)
Get the picture?
It might work blissfully well in testing on a local machine. You might even be running your own db and webserver on the exact same machine... but try that in the wild and before long unreliability will become an issue.
Listen, the simplest thing to do is wrap up ALL your parameters into ONE JS array and send that in ONE POST request. Then, on the server, do all your database selects and roll up the responses into ONE JSON/XML response.
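The server half of that can be sketched in a few lines - here in plain Node, with an in-memory object standing in for your real database (all the names are made up; in practice `lookupSnippet` would be a SELECT, or better still one `WHERE key IN (...)` query):

```javascript
// Stand-in for one database select per key.
function lookupSnippet(db, key) {
  return db[key];
}

// One handler receives the WHOLE array of keys and
// rolls everything up into ONE JSON response body.
function handleBatch(db, keys) {
  const results = {};
  for (const key of keys) {
    results[key] = lookupSnippet(db, key);
  }
  return JSON.stringify(results);
}

// Example in-memory "database":
const db = { a: 'snippet A', b: 'snippet B' };
console.log(handleBatch(db, ['a', 'b'])); // {"a":"snippet A","b":"snippet B"}
```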
At that point you are only ever waiting for ONE AJAX response. You can find all your data in the JSON/XML result.
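On the client, the whole thing then collapses into one POST. A minimal sketch, assuming a hypothetical `/api/batch` endpoint (the endpoint name and payload shape are mine, not anything standard):

```javascript
// Wrap ALL the parameters into one payload...
function buildBatchPayload(params) {
  return JSON.stringify({ keys: params });
}

// ...and send them in ONE request; one await, one response,
// one object holding all 100 answers.
async function fetchAll(params) {
  const res = await fetch('/api/batch', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: buildBatchPayload(params),
  });
  return res.json();
}
```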
Given that you are working with 100 requests, you could probably measure the time saving with a stopwatch!
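Some back-of-envelope arithmetic makes the point (the 50ms round trip and the 6-connection limit are assumed, illustrative numbers - not measurements):

```javascript
// Rough latency model: each request pays ~one round trip,
// and the browser runs at most ~6 requests in parallel.
const rtt = 0.05;     // 50 ms round trip, assumed
const requests = 100;
const parallel = 6;   // typical per-host connection limit

const naive = Math.ceil(requests / parallel) * rtt; // 17 waves of requests
const batched = rtt;                                // one request, one round trip

console.log(naive.toFixed(2));   // 0.85 (seconds)
console.log(batched.toFixed(2)); // 0.05 (seconds)
```

And that model is generous - it ignores per-request server, database, and script overhead, which all multiply by 100 too.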
Take it from me - do as few network requests as possible.