
I have a for loop with 100,000 iterations; each iteration performs a simple distance calculation between objects' positions. This is all part of a sophisticated collision detection mechanism.

Too many unnecessary iterations are inefficient and slow the program down. I would like to reduce the calculation time by using sorted vectors.

As an alternative, I thought of cutting the iterations down to a minimum by inserting references to the objects into a 2D vector that sorts their positions into a "grid". Instead of 100,000 iterations I would perhaps only need 1,000 iterations of the same calculation, now involving only a particular "sector". The downside, however, is that the 2D vector has to be updated regularly with each object's grid/sector position (using push_back and erase) whenever an object changes its position.
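
To make the idea concrete, here is a rough sketch of the bookkeeping I have in mind (everything here, names included, is purely illustrative): each grid cell is a std::vector of object indices, and when an object crosses into another sector its index is erased from the old cell and push_back'd into the new one.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative sketch: a uniform grid over a square world.
struct Grid {
    float cellSize;                               // width/height of one sector
    int   cols, rows;                             // grid dimensions
    std::vector<std::vector<std::size_t>> cells;  // object indices per sector

    Grid(float worldSize, float cell)
        : cellSize(cell),
          cols(static_cast<int>(worldSize / cell) + 1),
          rows(static_cast<int>(worldSize / cell) + 1),
          cells(static_cast<std::size_t>(cols) * rows) {}

    // Map a position to its sector index (positions assumed non-negative).
    std::size_t cellIndex(float x, float y) const {
        int cx = static_cast<int>(x / cellSize);
        int cy = static_cast<int>(y / cellSize);
        return static_cast<std::size_t>(cy) * cols + cx;
    }

    // The bookkeeping cost in question: erase the object from its old
    // sector and push_back it into the new one.
    void move(std::size_t obj, float oldX, float oldY, float newX, float newY) {
        std::size_t from = cellIndex(oldX, oldY);
        std::size_t to   = cellIndex(newX, newY);
        if (from == to) return;                   // most frames: no update at all
        auto& bucket = cells[from];
        bucket.erase(std::find(bucket.begin(), bucket.end(), obj));
        cells[to].push_back(obj);
    }
};
```

Note that the erase here is from a small per-sector bucket, not from one huge 100,000-element vector, and only objects that actually cross a cell boundary in a given frame pay that cost.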

My question is about performance, not the code itself. Is updating vectors with erase and push_back quicker than a brute-force iteration over everything? I just need a rough estimate of whether this idea is worth pursuing. Thanks.

Majte
  • Code both, benchmark. (Honestly it's not clear from your code-less description what you're actually doing, which makes it hard to do a theoretical analysis for you.) – millimoose Oct 04 '13 at 23:40
  • Yes, I guess I would have to do that. I was just wondering if there is a general rule to that. The loop is a simple for loop calculating the distance between one object's position and another's as it is iterated. The vector would hold a sorted list of these positions, which can be used as a boundary for the iteration. But I guess you are right that benchmarking it is the only way to know for sure. Thanks. – Majte Oct 04 '13 at 23:47
  • Uh the general rule is doing complexity analysis as they teach you in algorithms class. Which is almost always explained with pseudocode, and as I said, really not easy to do on a prose description. (Since you want to see which operation repeats how many times and what its internal complexity is.) – millimoose Oct 04 '13 at 23:49
  • Yeah, this is really hard to say without some code to go off of, but here is some stuff to consider: (1) [branch prediction](http://stackoverflow.com/questions/11227809/why-is-processing-a-sorted-array-faster-than-an-unsorted-array) might help you a lot if you sort, (2) it might take a long time to sort your array, (3) it sounds like you are copying into another array, which will increase your space complexity. – David says Reinstate Monica Oct 04 '13 at 23:52
  • For a complexity analysis I would need to know how "complex" the internal operations of a vector are when it is kept sorted by inserting and deleting elements. Given that I have a loop of 100 simple calculations per frame, and I reduce that to 10 per frame but instead have to manage a vector that is updated 10 times per frame by deleting and inserting elements, will this increase efficiency? – Majte Oct 04 '13 at 23:53
  • I guess I'll just do it and post the result... – Majte Oct 04 '13 at 23:55
  • @Majte The documentation for the vector should include that information. (At least the documentation for C# collections does.) – millimoose Oct 05 '13 at 00:03
  • If your element density and grid/closeness characteristics are suitable, you can get a reduction in complexity from the gridding - typically from O(n^2) all-pairs comparison to O(n) comparison with only the particles within close-enough neighboring grid cells. The cell-by-cell calculations should also run with better locality, since the working set size will be smaller for each one. However, that depends on setting the cell sizes according to your distance cutoff (slightly longer, or half that) and only comparing with sufficiently close neighboring cells. – Phil Miller Oct 05 '13 at 00:04
  • @Novelocrat Yes, I was thinking about comparing with sufficiently close neighboring cells depending on the max particle speed. This is roughly what I had in mind. However, I have to update the particles' locations in these cells, as all particles are in constant flux. This would add to the complexity, no? – Majte Oct 05 '13 at 00:13
  • Yes, but it stays approximately linear in the number of particles in the system. Look up 'spatial decomposition' in reference to N-body simulation for detailed analysis. There are even more in-depth optimizations that are possible, such as sorting the particles along the axis of each pair of blocks, and only comparing pairwise until you get a too-far result, then breaking out. Note, also, that the sectors are a natural unit of medium-grain parallelism. – Phil Miller Oct 23 '13 at 02:41
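
As a concrete (and purely illustrative) sketch of the neighboring-cell comparison described in the last two comments, assuming the cell size is at least the collision cutoff distance: each object is only tested against objects in its own cell and the eight surrounding cells.

```cpp
#include <cstddef>
#include <vector>

struct Pos { float x, y; };

// Count colliding pairs by scanning, for each object, only the 3x3 block
// of grid cells around it. Valid when the cell size >= cutoff.
std::size_t countCollisions(const std::vector<Pos>& pos,
                            const std::vector<std::vector<std::size_t>>& cells,
                            int cols, int rows, float cutoff) {
    std::size_t hits = 0;
    for (int cy = 0; cy < rows; ++cy) {
        for (int cx = 0; cx < cols; ++cx) {
            for (std::size_t a : cells[static_cast<std::size_t>(cy) * cols + cx]) {
                for (int dy = -1; dy <= 1; ++dy) {
                    for (int dx = -1; dx <= 1; ++dx) {
                        int nx = cx + dx, ny = cy + dy;
                        if (nx < 0 || ny < 0 || nx >= cols || ny >= rows) continue;
                        for (std::size_t b : cells[static_cast<std::size_t>(ny) * cols + nx]) {
                            if (b <= a) continue;   // count each pair once, skip self
                            float ddx = pos[a].x - pos[b].x;
                            float ddy = pos[a].y - pos[b].y;
                            if (ddx * ddx + ddy * ddy < cutoff * cutoff) ++hits;
                        }
                    }
                }
            }
        }
    }
    return hits;
}
```

The total work is roughly linear in the number of objects as long as the per-cell populations stay small, which is the reduction from O(n^2) mentioned above.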

2 Answers


What you are looking for is binary space partitioning. With the space partitioned, finding a colliding object for any given object is O(log N), where N is the number of objects. This should cut your collision detection cost from O(N²) to O(N log N).
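
For illustration only, a minimal kd-tree sketch (a kd-tree being one common form of binary space partitioning for point-like objects; this is a hand-rolled example, not a specific library) that answers "which objects lie within distance r of this point?" by descending only into half-spaces that can still contain a match:

```cpp
#include <cmath>
#include <memory>
#include <vector>

struct Point { float x, y; };

// Minimal 2D kd-tree: each node splits space with an axis-aligned plane,
// alternating between the x and y axes by depth.
struct KdNode {
    Point p;
    std::unique_ptr<KdNode> left, right;
};

std::unique_ptr<KdNode> insert(std::unique_ptr<KdNode> node, Point p, int depth = 0) {
    if (!node) {
        node = std::make_unique<KdNode>();
        node->p = p;
        return node;
    }
    bool  splitX = (depth % 2 == 0);
    float key    = splitX ? p.x       : p.y;
    float nkey   = splitX ? node->p.x : node->p.y;
    if (key < nkey)
        node->left  = insert(std::move(node->left),  p, depth + 1);
    else
        node->right = insert(std::move(node->right), p, depth + 1);
    return node;
}

// Collect all stored points within radius r of q; subtrees whose half-space
// lies farther than r from q are skipped entirely.
void query(const KdNode* node, Point q, float r,
           std::vector<Point>& out, int depth = 0) {
    if (!node) return;
    float dx = node->p.x - q.x, dy = node->p.y - q.y;
    if (dx * dx + dy * dy <= r * r) out.push_back(node->p);

    bool  splitX = (depth % 2 == 0);
    float diff   = splitX ? (q.x - node->p.x) : (q.y - node->p.y);
    // Always search the side q falls on; search the far side only if the
    // splitting plane is within r of q.
    query(diff < 0 ? node->left.get()  : node->right.get(), q, r, out, depth + 1);
    if (std::fabs(diff) <= r)
        query(diff < 0 ? node->right.get() : node->left.get(), q, r, out, depth + 1);
}
```

The O(log N) query assumes the tree stays reasonably balanced (e.g. built from shuffled input or rebuilt periodically); for a fully dynamic scene the incremental cost of keeping the structure current has to be weighed against the cheaper queries.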

Kuba hasn't forgotten Monica

If you're doing distance calculations between mostly static objects, you could use quadtrees to reduce the number of checks. If a lot of the objects are moving, then "loose quadtrees" are a better option.
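
A compact, purely illustrative point quadtree sketch (hand-rolled, not any particular library): each node covers a square region and splits into four children once it holds more than a few points, so a range query can skip whole quadrants that don't intersect the search area.

```cpp
#include <array>
#include <cstddef>
#include <memory>
#include <vector>

struct Pt { float x, y; };

struct AABB {                       // axis-aligned square region
    float cx, cy, half;             // centre and half-width
    bool contains(Pt p) const {
        return p.x >= cx - half && p.x <= cx + half &&
               p.y >= cy - half && p.y <= cy + half;
    }
    bool intersects(const AABB& o) const {
        return !(o.cx - o.half > cx + half || o.cx + o.half < cx - half ||
                 o.cy - o.half > cy + half || o.cy + o.half < cy - half);
    }
};

class Quadtree {
public:
    explicit Quadtree(AABB bounds) : bounds_(bounds) {}

    bool insert(Pt p) {
        if (!bounds_.contains(p)) return false;
        if (points_.size() < kCapacity && !children_[0]) {
            points_.push_back(p);
            return true;
        }
        if (!children_[0]) subdivide();
        for (auto& c : children_)
            if (c->insert(p)) return true;
        return false;                       // unreachable for in-bounds points
    }

    // Collect points inside `range`, descending only into overlapping quadrants.
    void query(const AABB& range, std::vector<Pt>& out) const {
        if (!bounds_.intersects(range)) return;
        for (Pt p : points_)
            if (range.contains(p)) out.push_back(p);
        if (children_[0])
            for (const auto& c : children_) c->query(range, out);
    }

private:
    static constexpr std::size_t kCapacity = 4;

    void subdivide() {
        float h = bounds_.half / 2.0f;
        children_[0] = std::make_unique<Quadtree>(AABB{bounds_.cx - h, bounds_.cy - h, h});
        children_[1] = std::make_unique<Quadtree>(AABB{bounds_.cx + h, bounds_.cy - h, h});
        children_[2] = std::make_unique<Quadtree>(AABB{bounds_.cx - h, bounds_.cy + h, h});
        children_[3] = std::make_unique<Quadtree>(AABB{bounds_.cx + h, bounds_.cy + h, h});
    }

    AABB bounds_;
    std::vector<Pt> points_;
    std::array<std::unique_ptr<Quadtree>, 4> children_;
};
```

A "loose" quadtree relaxes each node's bounds so that moving objects rarely have to be re-inserted into a different node, which is why it suits scenes where many objects move every frame.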

Buddy