I am playing around with numpy to simulate N-dimensional space. Note that I'm not seriously trying to make something that is efficient compared to existing software; I'm mostly just looking to learn something here.
That said, I'm still curious about the best way to design this algorithm.
Spatial simulation tends to require quite a lot of normalization calculations.
So, let's suppose that, to process 1 second of simulation, the computer needs to do 100 normalization calculations.
Numpy is capable of normalizing a large number of vectors at once, and I am guessing that it would be much faster to run one calculation that computes 100 norms than to run 100 calculations of 1 norm each.
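To illustrate what I mean, here's a small sketch comparing the two approaches (the shapes and numbers are just examples, not my actual simulation):

```python
import numpy as np

rng = np.random.default_rng(0)
vectors = rng.random((100, 3))  # e.g. 100 vectors in 3-D space

# Option A: normalize one vector at a time in a Python loop
normalized_loop = np.array([v / np.linalg.norm(v) for v in vectors])

# Option B: one batched call over all 100 vectors at once
norms = np.linalg.norm(vectors, axis=1, keepdims=True)
normalized_batch = vectors / norms

# Both give the same result
assert np.allclose(normalized_loop, normalized_batch)
```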
Would it make sense to keep a global list of "vectors to normalize", and then process them all at once at the end of each second of simulation? Or are the benefits of that approach not really significant?
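To make that idea concrete, here's roughly the kind of structure I have in mind (the class and method names are just placeholders, not anything I've settled on):

```python
import numpy as np

class NormalizationQueue:
    """Collect vectors during a simulation tick, then normalize them all in one call."""

    def __init__(self):
        self.pending = []

    def add(self, vector):
        self.pending.append(vector)
        return len(self.pending) - 1  # index to look up the result after flushing

    def flush(self):
        if not self.pending:
            return np.empty((0, 0))
        batch = np.asarray(self.pending)
        self.pending = []
        # One vectorized call instead of len(batch) separate ones
        return batch / np.linalg.norm(batch, axis=1, keepdims=True)
```

The simulation code would call `add()` whenever it needs a normalization, and then `flush()` once at the end of each simulated second.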
I am guessing that this depends on exactly how much overhead is associated with running the calculations. Am I on the right track here?