
Context: I'm writing an app in Java to load-test a MOOC web service. I know other tools already exist, but I need detailed reports for each of my custom scenarios, and it seems easier to generate them from my own app. In short, everything is timed and I'm drawing graphs of things like time until connection accepted, response time, etc. I need the numbers to be accurate (in proportion to each other).

Problem: I can start each connection in a new thread and run one scenario per thread. The drawback is that the number of threads my machine can handle is limited, so I need a better alternative.

Question: What can I do to start and run more connections than the allowed number of threads on my machine without using another machine?

Idea I had: I could start and run every connection from a single thread. The thread would have a queue of actions to execute, and each time a method call returns from the web service, the callback would push the scenario's next action onto the queue. See the pseudo-code below.

Question: Would this idea induce a synchronization cost too high to have a proper response-time graph?


Code for What I have:

// simplified app code (omitting time measurement)
for each scenario
    start a new thread to run the scenario

// simplified scenario code (omitting time measurement)
repeat    
    start a method call
    wait until method response

// simplified callback code (omitting time measurement)
on response:
    notify scenario
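
Roughly what that looks like in Java; Scenario, Action, webService.call and recordResponseTime are placeholders for my real classes, so this is only a sketch:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// One thread per scenario: each call blocks that thread until the callback fires.
ExecutorService pool = Executors.newCachedThreadPool();
for (Scenario scenario : scenarios) {
    pool.submit(() -> {
        try {
            for (Action action : scenario.actions()) {
                CountDownLatch done = new CountDownLatch(1);
                long start = System.nanoTime();
                webService.call(action, response -> {       // async callback from the web service
                    recordResponseTime(System.nanoTime() - start);
                    done.countDown();                        // "notify scenario"
                });
                done.await();                                // "wait until method response"
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    });
}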

Code for my idea:

// simplified app code (omitting time measurement)
repeat
    wait if actionQueue is empty
    otherwise pop first action
    execute first action // could be a method call

// simplified callback code (omitting time measurement)
on response:
    given id of scenario that called the method
    push next action for this scenario into actionQueue
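
A minimal Java sketch of that dispatcher, assuming each action is simply a Runnable that starts the next non-blocking call of one scenario (the class name is made up):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SingleThreadDispatcher {
    // Each action is a Runnable that starts the next (non-blocking) call of one scenario.
    private final BlockingQueue<Runnable> actionQueue = new LinkedBlockingQueue<>();

    public void start() {
        Thread dispatcher = new Thread(() -> {
            try {
                while (true) {
                    actionQueue.take().run();   // waits if the queue is empty, then executes the action
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        dispatcher.start();
    }

    // Called from the web-service callback: schedule the calling scenario's next step.
    public void onResponse(Runnable nextActionOfScenario) {
        actionQueue.offer(nextActionOfScenario);
    }
}

Each scenario would keep its own cursor into its list of actions, so the callback only has to pass the scenario's next step to onResponse.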

1 Answer


It is not an easy job to measure the things you've mentioned (time until connection accepted, response time, etc.) correctly. You should still consider using JMeter or a similar HTTP load generator, combining its results with access-log timestamps from your web server to get the timing you need.

You would need to do that (i.e. combine the results from your load generator with the access logs) anyway, since from the load-generator side you only see the total round-trip time, which combines queuing, serving and network latency.

If you still think writing your own load generator is the best option, let's get back to your questions:

Would this idea induce a synchronization cost too high to have a proper response-time graph?

You would have to measure that.

The biggest time cost would come from the number of responses you get simultaneously and the time you spend processing them, rather than from the synchronization itself. Having multiple threads hides this cost behind context switching, but it would still be there. Depending on the response rate you need to handle, a dedicated thread per connection can be faster and cheaper than multiplexing several connections per thread; it is when you need to handle thousands of connections on one machine that your idea pays off.
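
One way to check it: timestamp each action when the callback enqueues it and again when your single dispatcher thread picks it up; the difference is the overhead your design adds on top of the real response time. A rough sketch (TimedAction is an illustrative name, not something from your code):

// Wraps a queued action with an enqueue timestamp so the dispatcher
// can record how long it sat in the queue before being executed.
final class TimedAction implements Runnable {
    private final Runnable delegate;
    private final long enqueuedAtNanos = System.nanoTime();

    TimedAction(Runnable delegate) { this.delegate = delegate; }

    @Override
    public void run() {
        long queueDelayNanos = System.nanoTime() - enqueuedAtNanos;
        // report (or subtract) this delay in the response-time graphs
        System.out.printf("queue delay: %.3f ms%n", queueDelayNanos / 1_000_000.0);
        delegate.run();
    }
}

Wrapping every action before offering it (actionQueue.offer(new TimedAction(nextAction))) lets you plot the dispatch delay next to the response times and see when the single thread becomes the bottleneck.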

In any case, you would need to carefully compare the serving time reported by your server (plus the network round-trip time) against the response time seen by the load generator, to understand how response processing affects the measured response time as the load grows.

What can I do to start and run more connections than the allowed number of threads on my machine without using another machine?

You can use NIO to run more connections than you have threads. It saves you the idle time otherwise spent waiting for a response, though for handling many responses simultaneously you would still want more than one thread.
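
For illustration only, a bare-bones sketch with a Selector: one thread opens many non-blocking connections and records the time until each connect completes (host, port and counts are placeholders):

import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class NioConnectSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        int pending = 0;

        // Open many non-blocking connections from a single thread.
        for (int i = 0; i < 1_000; i++) {
            SocketChannel channel = SocketChannel.open();
            channel.configureBlocking(false);
            channel.connect(new InetSocketAddress("localhost", 8080));              // placeholder host/port
            channel.register(selector, SelectionKey.OP_CONNECT, System.nanoTime()); // attach start time
            pending++;
        }

        // The same single thread services all of them.
        while (pending > 0 && selector.select() > 0) {
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isConnectable() && ((SocketChannel) key.channel()).finishConnect()) {
                    long startNanos = (Long) key.attachment();
                    System.out.printf("connection accepted after %.3f ms%n",
                            (System.nanoTime() - startNanos) / 1_000_000.0);
                    key.cancel();   // a real test would switch to OP_WRITE here and send a request
                    pending--;
                }
            }
            selector.selectedKeys().clear();
        }
    }
}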

Have a look at Netty, which builds on non-blocking I/O to handle thousands of connections with a small number of threads.
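
For example, a rough sketch of the same idea with Netty 4 (host, port and pipeline details are placeholders; a real test would also write the HTTP request once the channel is active):

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpClientCodec;
import io.netty.handler.codec.http.HttpObjectAggregator;

public class NettyClientSketch {
    public static void main(String[] args) {
        // A couple of event-loop threads service every connection.
        EventLoopGroup group = new NioEventLoopGroup(2);

        Bootstrap bootstrap = new Bootstrap()
                .group(group)
                .channel(NioSocketChannel.class)
                .handler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) {
                        ch.pipeline().addLast(
                                new HttpClientCodec(),
                                new HttpObjectAggregator(1 << 20),
                                new SimpleChannelInboundHandler<FullHttpResponse>() {
                                    @Override
                                    protected void channelRead0(ChannelHandlerContext ctx, FullHttpResponse resp) {
                                        // record the response timestamp for this scenario here
                                        ctx.close();
                                    }
                                });
                    }
                });

        for (int i = 0; i < 1_000; i++) {
            long start = System.nanoTime();
            bootstrap.connect("localhost", 8080)                  // placeholder host/port
                     .addListener((ChannelFutureListener) f -> {
                         if (f.isSuccess()) {
                             // "time until connection accepted" for this connection
                             System.out.println("connected in " + (System.nanoTime() - start) / 1_000_000 + " ms");
                         }
                     });
        }
        // call group.shutdownGracefully() once all responses have been collected
    }
}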

P.S. The post How do I write a correct micro-benchmark in Java? is worth reading, since most of its principles (warm-up, timing and a few others) apply at the macro level too.
