
I am having a problem with Guava RateLimiter. I create the RateLimiter with RateLimiter.create(1.0) ("1 permit per second") and call rateLimiter.acquire() on every cycle, but when I run my test I get the following result:

Average: 1232.0 Diff since last: 2540
Average: 1180.0 Diff since last: 258
Average: 1159.0 Diff since last: 746
Average: 1151.0 Diff since last: 997
Average: 1144.0 Diff since last: 1004

Average is the average number of milliseconds between iterations, and diff is the number of milliseconds since the last print. On average it's okay: it does not permit my code to run more than once per second. But sometimes (as you can see) it runs more than once per second.

Do you have any idea why? Am I missing something?

The code that generates the above output:

private int numberOfRequests;
private Date start;
private long last;
private boolean first = true;

private void sleep() {
    numberOfRequests++;
    if (first) {
        first = false;
        start = new Date();
    }

    rateLimiter.acquire();

    long current = new Date().getTime();
    double num = (current - start.getTime()) / numberOfRequests;
    System.out.println("Average: " + num + " Diff since last: " + (current - last));
    last = current;
}
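One small quirk in the averaging line above (probably not the cause of the bursts, but worth knowing): `(current - start.getTime()) / numberOfRequests` performs long division first and only then widens to `double`, so the fractional part of the average is silently dropped. A minimal stdlib-only sketch of the difference, using hypothetical numbers:

```java
public class TruncationDemo {
  public static void main(String[] args) {
    long elapsedMs = 2999;  // hypothetical total elapsed time
    int requests = 2;       // hypothetical request count

    // Long division happens first, then widening: the fraction is lost.
    double truncated = elapsedMs / requests;       // 1499.0

    // Casting one operand first keeps the fraction.
    double exact = (double) elapsedMs / requests;  // 1499.5

    System.out.println(truncated + " vs " + exact);
  }
}
```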
Jen
  • Please post an [MCVE](http://www.stackoverflow.com/help/mcve) of the code which generates this output. – Andy Turner Oct 20 '15 at 11:13
  • @AndyTurner I have updated the question with the code that produces the output. – Jen Oct 20 '15 at 11:19
  • OK, one thing that jumps out straight away is that you shouldn't use `System.currentTimeMillis()` (which `new Date()` uses internally) to measure elapsed time - see KevinB's answer here http://stackoverflow.com/a/1776053/3788176 - you can't actually rely on it increasing monotonically. You should use `System.nanoTime()` instead. – Andy Turner Oct 20 '15 at 11:21
  • https://code.google.com/p/guava-libraries/source/browse/guava/src/com/google/common/util/concurrent/RateLimiter.java There is quite a detailed description of the algorithm used in RateLimiter in the source code. – Andy Turner Oct 20 '15 at 11:22
  • I see! Thanks for pointing out the time problem. It will be useful in other projects as well! I have read the source code, but didn't find the source of the problem. Will try again! @AndyTurner – Jen Oct 20 '15 at 11:31
  • Your code is working fine in my workspace, since you have the same instance of `rateLimiter`. Can you provide the whole class? – renke Oct 20 '15 at 11:34
  • @JoaoMosquito It is basically just a while loop where I make a simple RESTful call after calling my sleep method. If I just call my sleep method and remove the API calls it works perfectly fine, so it must be something with my calls and the fact that they each take a different amount of time. – Jen Oct 20 '15 at 11:52
  • Then you must be using different instances of the class containing your `RateLimiter` for each call. – renke Oct 20 '15 at 18:21
  • There is only one instance. I run the class itself. I created my own implementation of a very simple rate limiter and that works fine too. @JoaoMosquito – Jen Oct 21 '15 at 08:53
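As suggested in the comments, elapsed time is better measured with the monotonic `System.nanoTime()` than with the wall-clock time behind `new Date()`. A minimal stdlib-only sketch of that measurement style:

```java
public class MonotonicTimingDemo {
  public static void main(String[] args) throws InterruptedException {
    // System.nanoTime() is monotonic: it never jumps backwards when the
    // wall clock is adjusted (NTP corrections, manual changes, etc.).
    long start = System.nanoTime();
    Thread.sleep(100);  // stand-in for the rate-limited work
    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
    System.out.println("Elapsed: " + elapsedMs + " ms");
  }
}
```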

1 Answer


Your benchmark appears to be flawed - when I try to replicate it I see very close to one acquisition per second. Here's my benchmark:

import com.google.common.math.StatsAccumulator;
import com.google.common.util.concurrent.RateLimiter;

public class RateLimiterDemo {
  public static void main(String[] args) {
    StatsAccumulator stats = new StatsAccumulator();
    RateLimiter rateLimiter = RateLimiter.create(1.0);
    rateLimiter.acquire(); // discard initial permit
    for (int i = 0; i < 10; i++) {
      long start = System.nanoTime();
      rateLimiter.acquire();
      stats.add((System.nanoTime() - start) / 1_000_000.0);
    }
    System.out.println(stats.snapshot());
  }
}

A sample run prints:

Stats{count=10, mean=998.9071456, populationStandardDeviation=3.25398397901304, min=989.303887, max=1000.971085}

The variance there is almost all attributable to the benchmark overhead (populating the StatsAccumulator and computing the time delta).

Even this benchmark has flaws (though I'll contend it has fewer). Creating an accurate benchmark is very hard, and simple whipped-together benchmarks are often either inaccurate or, worse, don't reflect the actual performance of the code being tested in a production setting.

dimo414