
I have written two different ways to transform Euler angles to a normalized unit direction vector, but I'm not sure which one is faster: the one that uses trigonometric operations, or the one that transforms the forward vector through a rotation matrix?

D3DXVECTOR3 EulerToDir(D3DXVECTOR3 EulerRotation)//Convert Euler angles to the unit direction vector.
{
    return D3DXVECTOR3(sin(EulerRotation.x) * cos(EulerRotation.y),
                       -sin(EulerRotation.y),
                       cos(EulerRotation.x) * cos(EulerRotation.y));
}
D3DXVECTOR3 EulerToDirM(D3DXVECTOR3 EulerRotation)//Same thing but using matrix transformation. More accurate. 
{                    
    D3DXMATRIX rotMat;
    D3DXMatrixRotationYawPitchRoll(&rotMat, EulerRotation.x, EulerRotation.y, EulerRotation.z);

    D3DXVECTOR3 resultVec(0, 0, 1);//Facing towards the z.

    D3DXVec3TransformNormal(&resultVec, &resultVec, &rotMat);

    return resultVec;
}

Thanks.

2 Answers


Well, what exactly do you care about? Memory usage, as stated in the top-level question? Or speed, as specified in the description?

If it's speed, the only real way to tell is measure it on your target architecture/environment. Trying to guess is usually a waste of time.

The easiest way to test the performance of self-contained code snippets is to set up a small test where you do something like this:

// setup everything first (needs #include <chrono>)
auto startTime = std::chrono::steady_clock::now();
for (int i = 0; i < NUM_ITERATIONS; ++i)
{
    // code to be performance tested
}
auto endTime = std::chrono::steady_clock::now();

Then endTime - startTime gives you the elapsed time (e.g. via std::chrono::duration_cast<std::chrono::microseconds>), and you can see which code took longer to run.
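
For instance, a minimal sketch of such a micro-benchmark for the two functions from the question could look like the following. This assumes the D3DX math header from the question is available (d3dx9math.h here) and that EulerToDir/EulerToDirM are pasted above main; the volatile sink is only there to keep the compiler from optimizing the loops away:

#include <chrono>
#include <cstdio>
#include <d3dx9math.h> // assumption: same header the question's code uses

// ... EulerToDir and EulerToDirM as defined in the question ...

int main()
{
    const int NUM_ITERATIONS = 10000000;
    D3DXVECTOR3 angles(0.3f, 0.7f, 0.0f);
    volatile float sink = 0.0f; // prevents the loops from being optimized away

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < NUM_ITERATIONS; ++i)
        sink += EulerToDir(angles).x;   // trigonometry version
    auto t1 = std::chrono::steady_clock::now();
    for (int i = 0; i < NUM_ITERATIONS; ++i)
        sink += EulerToDirM(angles).x;  // matrix version
    auto t2 = std::chrono::steady_clock::now();

    using us = std::chrono::microseconds;
    printf("EulerToDir : %lld us\n", (long long)std::chrono::duration_cast<us>(t1 - t0).count());
    printf("EulerToDirM: %lld us\n", (long long)std::chrono::duration_cast<us>(t2 - t1).count());
}

The important parts are to run enough iterations that the measured time dwarfs the timer resolution, and to compile with the same optimization settings you actually ship with.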

If you need to test memory usage, you could print out sizeof() of the classes/structs if they are simple; otherwise you could allocate them while instrumenting your code with valgrind/massif.
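
As a quick illustration of the sizeof() route, assuming the D3DX types from the question (the exact values depend on the SDK, but both are plain structs of floats):

#include <cstdio>
#include <d3dx9math.h> // assumption: same header the question's code uses

int main()
{
    // Both types are simple aggregates of floats, so sizeof reflects their footprint directly.
    printf("sizeof(D3DXVECTOR3) = %u bytes\n", (unsigned)sizeof(D3DXVECTOR3)); // typically 12 (3 floats)
    printf("sizeof(D3DXMATRIX)  = %u bytes\n", (unsigned)sizeof(D3DXMATRIX));  // typically 64 (16 floats)
}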

fileoffset
  • I care about both of them. Doesn't memory usage affect performance? For example, it's faster to divide floats than doubles. Or am I wrong? –  Jul 14 '15 at 06:00
  • I don't think there is any significant difference in speed between dividing floats or doubles. In _general_ there is a _loose_ correlation between memory usage and performance, but they are by no means linked in lock-step. The way you allocate (and the size of the allocation), combined with the size of the data type you are working with, the cache size of the processor, the memory access/usage pattern - these all contribute to the speed of your program. Optimizing software is difficult due to all the 'moving parts', which is why I suggested measuring everything you care about. – fileoffset Jul 14 '15 at 06:40

You can do a complexity analysis on the functions using Big-O notation. Your example uses the predefined sine/cosine functions, which are system dependent and implemented in many different ways, and the C++ library picks whichever is better for a particular input x. (See: different implementations of sine.)

You should try searching MSDN for the complexity of the matrix operations you performed, though I believe the EulerToDirM function uses matrix operations that are at least O(N), while EulerToDir gives the result in O(1), which is better.

phraniiac
  • Big-O doesn't give you much here; instead, measure with a profiler as @fileoffset says in the other answer. – milianw Jul 14 '15 at 09:09
  • @milianw I do agree that Big-O is not machine specific. But from a theoretical point of view, this would be a good approach. – phraniiac Jul 14 '15 at 09:59
  • Yes, but it's purely theoretical. When you want to look at performance, you have to be practical. – milianw Jul 14 '15 at 16:33