
Consider the following scenario:

  1. An object pool is created with 1000 objects, each with a moderate instantiation and initialization cost.
  2. Of those, 800 are used in the application for a while.
  3. Then, 200 are no longer needed, so they are released and returned to the pool.
  4. Not long after, 300 more are needed and are about to be taken from the pool...

Disregarding differences in code, and considering the system's automatic memory and processor caching and other "hidden" optimizations (such as branch prediction): for the pool's backing structure, is getting the most recently used object(s) (i.e. using a stack) faster than getting the most inactive object(s) (i.e. using a queue)?

In other words, will getting 200 "used" + 100 "new" objects from the pool (stack approach) be faster than getting 200 "new" + 100 "used" objects (queue approach)?
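
For reference, here is a minimal sketch of what I mean by the two backends (illustrative Java; the `PooledObject`, `ObjectPool`, and `PoolDemo` names are just placeholders, and the `lifo` flag switches between the stack and queue behavior):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical pooled object with a moderate construction/initialization cost.
class PooledObject {
    private final byte[] payload = new byte[4096]; // stand-in for "moderate weight"
    void reset() { /* re-initialize state before reuse */ }
}

// Minimal pool whose backing structure can act as a stack (LIFO) or a queue (FIFO).
class ObjectPool {
    private final Deque<PooledObject> idle = new ArrayDeque<>();
    private final boolean lifo;

    ObjectPool(int size, boolean lifo) {
        this.lifo = lifo;
        for (int i = 0; i < size; i++) idle.addLast(new PooledObject());
    }

    PooledObject acquire() {
        // LIFO hands back the most recently released (possibly still cache-warm) object;
        // FIFO hands back the object that has been idle the longest.
        PooledObject obj = lifo ? idle.pollLast() : idle.pollFirst();
        if (obj != null) obj.reset();
        return obj;
    }

    void release(PooledObject obj) {
        idle.addLast(obj);
    }
}

public class PoolDemo {
    public static void main(String[] args) {
        ObjectPool pool = new ObjectPool(1000, true); // true = stack, false = queue

        // Mirror the scenario: take 800, release 200, then take 300 more.
        List<PooledObject> inUse = new ArrayList<>();
        for (int i = 0; i < 800; i++) inUse.add(pool.acquire());
        for (int i = 0; i < 200; i++) pool.release(inUse.remove(inUse.size() - 1));
        for (int i = 0; i < 300; i++) inUse.add(pool.acquire());
    }
}
```

With `lifo = true`, the final 300 acquisitions return the 200 just-released objects first and then 100 untouched ones; with `lifo = false`, they return 200 untouched objects first and then 100 of the released ones, which is exactly the difference I'm asking about.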

Yes, I'm aware this is probably overkill optimization... Just bear with me; I think it's an interesting question! I don't have the technical know-how to separate a speed difference caused by the code (which should be disregarded) from one caused by those other factors.

