
Where do the x86-64 SSE instructions (vector instructions) outperform the normal instructions? Because what I'm seeing is that the frequent loads and stores required for executing SSE instructions nullify any gain we get from the vector calculation. So could someone give me an example of SSE code that performs better than the normal code?

It's maybe because I am passing each parameter separately, like this...

__m128i a = _mm_set_epi32(pa[0], pa[1], pa[2], pa[3]);
__m128i b = _mm_set_epi32(pb[0], pb[1], pb[2], pb[3]);
__m128i res = _mm_add_epi32(a, b);

for( i = 0; i < 4; i++ )
 po[i] = res.m128i_i32[i];

Isn't there a way I can pass all 4 integers in one go, I mean pass the whole 128 bits of pa at once? And assign res.m128i_i32 to po in one go?

pythonic
  • Basically, whenever you have an extremely high computation/load-store ratio. – Mysticial Apr 25 '12 at 10:03
  • Yeah, you definitely don't want to use `_mm_set_epi32()` like that. Use `_mm_load_si128()`. And if you can't align the data, you can use `_mm_loadu_si128()` at a performance penalty. – Mysticial Apr 25 '12 at 10:11
  • Align the data? What do you mean by that? – pythonic Apr 25 '12 at 10:12
  • SSE aligned load/stores require that the address be aligned to 16 bytes. If you can't guarantee that, you can use misaligned load/stores - but at a performance penalty. – Mysticial Apr 25 '12 at 10:13
  • I have numerous examples of SSE and AVX showing massive speedup over normal code. But they're all too large to be posted here. A good rule of thumb is to have at least 3 - 4 operations for every memory access. In your example, even if you used `_mm_load_si128()` and `_mm_store_si128()` properly, you have 1 operation for 3 memory accesses. That's why you aren't getting any speedup. – Mysticial Apr 25 '12 at 10:18
  • Sure, it would be very helpful to post an example. By the way, would pragma pack be good for aligning an array to 16-byte boundaries? Because I'm now getting an exception when I use _mm_loadu_si128. – pythonic Apr 25 '12 at 10:21
  • I'll see if I can dig up a simpler example. Here's a reference on how to align data: http://software.intel.com/sites/products/documentation/studio/composer/en-us/2011/compiler_c/intref_cls/common/intref_bk_data_align.htm – Mysticial Apr 25 '12 at 10:23
  • Yes, it's working now, but still slower than the original code, although now close to it; previously it was much slower. But since you said it requires at least 3 - 4 operations per memory access to make it faster, could you give some example? Code is not necessary, just an example of operations. – pythonic Apr 25 '12 at 10:30
  • [My answer here](http://stackoverflow.com/a/8391601/922184) has a notorious contrived example of achieving 95% of a processor's maximum performance using SSE. In a lot of cases, you need to combine the high computation/memory-access ratio with loop-unrolling to get massive speedups over normal code. I'll post an answer in a few minutes with links to more examples on SO. – Mysticial Apr 25 '12 at 10:34

1 Answer


Summarizing comments into an answer:

You have basically fallen into the same trap that catches most first-timers. There are two problems in your example:

  1. You are misusing _mm_set_epi32().
  2. You have a very low computation/load-store ratio. (1 arithmetic operation to 3 memory accesses in your example)

_mm_set_epi32() is a very expensive intrinsic. Although it's convenient to use, it doesn't compile to a single instruction. Some compilers (such as VS2010) can generate very poorly performing code when using _mm_set_epi32().

Instead, since you are loading contiguous blocks of memory, you should use _mm_load_si128(). That requires that the pointer be aligned to 16 bytes. If you can't guarantee this alignment, you can use _mm_loadu_si128() - but with a performance penalty. Ideally, you should properly align your data so that you don't need to resort to using _mm_loadu_si128().


To be truly efficient with SSE, you'll also want to maximize your computation/load-store ratio. A target that I shoot for is 3 - 4 arithmetic instructions per memory access. This is a fairly high ratio. Typically you have to refactor the code or redesign the algorithm to increase it. Combining passes over the data is a common approach.

Loop unrolling is often necessary to maximize performance when you have large loop bodies with long dependency chains.


Some examples of SO questions that successfully use SSE to achieve speedup.

Mysticial