I tried to port some code from the FANN library (a neural network written in C) to SSE2, but the SSE2 version performs worse than the normal code: with my SSE2 implementation one run takes 5.50 min, versus 5.20 min without it.

How can the SSE2 code be slower than the normal code? Could it be because of _mm_set_ps? I use the Apple LLVM compiler (Xcode 4) to compile the code (all SSE extension flags are on, optimization level is -Os).

Code without SSE2

                neuron_sum +=
                fann_mult(weights[i], neurons[i].value) +
                fann_mult(weights[i + 1], neurons[i + 1].value) +
                fann_mult(weights[i + 2], neurons[i + 2].value) +
                fann_mult(weights[i + 3], neurons[i + 3].value);

SSE2 code

                __m128 a_line = _mm_loadu_ps(&weights[i]);               /* unaligned load of 4 weights */
                __m128 b_line = _mm_set_ps(neurons[i+3].value, neurons[i+2].value,
                                           neurons[i+1].value, neurons[i].value);  /* gather 4 neuron values from the array of structs */
                __m128 c_line = _mm_mul_ps(a_line, b_line);
                neuron_sum += c_line[0] + c_line[1] + c_line[2] + c_line[3];        /* horizontal sum every iteration */
  • 7
    If you look at the assembly, it should be pretty clear why it's slower. You'll probably be amazed at what `_mm_set_ps` compiles to. (So yes, you're right in suspecting `_mm_set_ps`.) – Mysticial Mar 26 '12 at 20:16
  • Sorry, it's only a define: #define fann_mult(x,y) (x*y) – martin s Mar 26 '12 at 20:16
  • 1
    Relevant: http://stackoverflow.com/questions/4120681/how-to-mulitply-two-vectors-and-sum-the-resulting-vector-using-sse-intrinsic-fun – kennytm Mar 26 '12 at 20:33

1 Answer

To have any chance of seeing a speedup here you need to do the following:

  • make sure weights[i] is 16-byte aligned and then use _mm_load_ps instead of _mm_loadu_ps
  • reorganise neurons[] so that it is SoA (structure-of-arrays) instead of AoS (array-of-structures), and also 16-byte aligned, and then use _mm_load_ps to load 4 values at a time
  • move the horizontal sum out of the loop (there is a loop, right?) - just keep 4 partial sums in a vector vneuron_sum and then do one final horizontal sum on this vector after the loop (see the sketch after this list)
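
For illustration, here is a rough sketch of what the inner loop might look like with all three changes combined. It assumes a hypothetical SoA layout in which the neuron values have been copied into a plain 16-byte-aligned float array (here called neuron_values, which is not how FANN actually stores them) and that the connection count is a multiple of 4:

    #include <emmintrin.h>  /* SSE2 intrinsics */

    /* Hypothetical SoA layout: weights[] and neuron_values[] are 16-byte
       aligned float arrays and num_connections is a multiple of 4. */
    static float dot_product_sse2(const float *weights,
                                  const float *neuron_values,
                                  unsigned int num_connections)
    {
        __m128 vneuron_sum = _mm_setzero_ps();   /* 4 partial sums */
        unsigned int i;

        for (i = 0; i < num_connections; i += 4)
        {
            __m128 w = _mm_load_ps(&weights[i]);        /* aligned load */
            __m128 v = _mm_load_ps(&neuron_values[i]);  /* aligned load */
            vneuron_sum = _mm_add_ps(vneuron_sum, _mm_mul_ps(w, v));
        }

        /* one horizontal sum after the loop (SSE2 only, no _mm_hadd_ps) */
        __m128 hi = _mm_movehl_ps(vneuron_sum, vneuron_sum);    /* lanes [2,3] -> [0,1] */
        vneuron_sum = _mm_add_ps(vneuron_sum, hi);              /* 0+2, 1+3 */
        hi = _mm_shuffle_ps(vneuron_sum, vneuron_sum, _MM_SHUFFLE(1, 1, 1, 1));
        vneuron_sum = _mm_add_ss(vneuron_sum, hi);              /* (0+2)+(1+3) in lane 0 */
        return _mm_cvtss_f32(vneuron_sum);
    }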

Even then you won't see a huge speed-up, as you're only doing one arithmetic operation for every two loads and one store. Since most modern x86 CPUs have two scalar FPUs anyway, you probably won't get close to the theoretical 4x speed-up for 128-bit float SIMD; I'd expect no more than, say, a 50% speed-up relative to the scalar code.

Paul R
  • 208,748
  • 37
  • 389
  • 560
  • 1
    thanks for your answer. Yes there is a loop, i think it will be hard to reorganise the neurons because than I have to rewrite the whole framework. But the idea with the sum horizontal + vertical worked well and I could improve the performance to 4.30m – martin s Mar 26 '12 at 21:03
  • 1
    @user1293890 - glad it helped - *post hoc* SIMD optimisation such as this can be pretty tricky unless you're prepared to do major surgery on the relevant data structures. – Paul R Mar 26 '12 at 21:06