
I wanted to take my first steps with Intel's SSE, so I followed the guide published here, with the difference that instead of developing for Windows in C++ I did it for Linux in C (so I use posix_memalign rather than _aligned_malloc).

I also implemented the same compute-intensive method without using the SSE extensions. Surprisingly, when I run the program, both pieces of code (the one with SSE and the one without) take a similar amount of time, and the SSE version is usually slightly slower.

Is that normal? Could it be that GCC already optimizes with SSE (even when using the -O0 option)? I also tried the -mfpmath=387 option, but nothing changed.
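The actual code and command line were not included in the question, so the sketch below is only a guess at the kind of comparison being described: a scalar float loop versus an explicit SSE loop over posix_memalign'd buffers. The file name, array size, and compile command are assumptions.

```c
/* Hypothetical sketch only: the question does not include the actual code,
 * so the array size, loop structure, and compile command are guesses.
 * Compile with e.g.: gcc -std=c99 -msse -O0 sse_test.c -o sse_test */
#define _POSIX_C_SOURCE 200112L   /* for posix_memalign */
#include <stdio.h>
#include <stdlib.h>
#include <xmmintrin.h>            /* SSE intrinsics */

#define N (1 << 20)               /* assumed element count, multiple of 4 */

static void add_scalar(const float *a, const float *b, float *c)
{
    for (size_t i = 0; i < N; i++)
        c[i] = a[i] + b[i];
}

static void add_sse(const float *a, const float *b, float *c)
{
    for (size_t i = 0; i < N; i += 4) {
        __m128 va = _mm_load_ps(a + i);   /* aligned 4-float load */
        __m128 vb = _mm_load_ps(b + i);
        _mm_store_ps(c + i, _mm_add_ps(va, vb));
    }
}

int main(void)
{
    float *a, *b, *c;
    /* 16-byte alignment is required by _mm_load_ps/_mm_store_ps */
    if (posix_memalign((void **)&a, 16, N * sizeof(float)) ||
        posix_memalign((void **)&b, 16, N * sizeof(float)) ||
        posix_memalign((void **)&c, 16, N * sizeof(float)))
        return 1;

    for (size_t i = 0; i < N; i++) { a[i] = (float)i; b[i] = 1.0f; }

    add_scalar(a, b, c);   /* time this ...     */
    add_sse(a, b, c);      /* ... against this  */
    printf("%f\n", c[N - 1]);

    free(a); free(b); free(c);
    return 0;
}
```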

– Genís (edited by plasmacel)

2 Answers


For floating point operations you may not see a huge benefit with SSE. Most modern x86 CPUs have two FPUs so double precision may only be about the same speed for SIMD vs scalar, and single precision might give you 2x for SIMD over scalar on a good day. For integer operations though, e.g. image or audio processing at 8 or 16 bits, you can still get substantial benefits with SSE.
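As an illustration of the integer case (this sketch is not from the answer itself; the saturating-add example and the alignment/size assumptions are mine), one SSE2 instruction can process 16 unsigned 8-bit values at once, where the scalar loop handles one per iteration:

```c
/* Illustrative sketch: adding two 8-bit images with saturation.
 * One SSE2 instruction processes 16 pixels per iteration,
 * versus 1 pixel per iteration in the scalar version. */
#include <stddef.h>
#include <stdint.h>
#include <emmintrin.h>   /* SSE2 integer intrinsics */

/* Scalar: one saturating 8-bit add per iteration. */
void add_u8_scalar(const uint8_t *a, const uint8_t *b, uint8_t *c, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        unsigned s = a[i] + b[i];
        c[i] = (uint8_t)(s > 255 ? 255 : s);
    }
}

/* SSE2: 16 saturating 8-bit adds per iteration (assumes n is a
 * multiple of 16 and the buffers are 16-byte aligned). */
void add_u8_sse2(const uint8_t *a, const uint8_t *b, uint8_t *c, size_t n)
{
    for (size_t i = 0; i < n; i += 16) {
        __m128i va = _mm_load_si128((const __m128i *)(a + i));
        __m128i vb = _mm_load_si128((const __m128i *)(b + i));
        _mm_store_si128((__m128i *)(c + i), _mm_adds_epu8(va, vb));
    }
}
```

With single-precision floats the same 128-bit register only holds 4 elements, which is part of why the floating-point gain is smaller.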

– Paul R
  • That may be the cause. I will try a single precision version. – Genís Aug 10 '11 at 16:25
  • OK - add the code and the command line to your question too though - there are so many simple things that you can get wrong when starting to work with SIMD. – Paul R Aug 10 '11 at 16:34
  • You were right, Paul R. The version that uses 32-bit integers gets a speedup of approximately 2x. I suppose that with 16- and 8-bit operations the benefits would be even better. By the way, I deleted that square root operation in the integer version. Thanks a lot. – Genís Aug 11 '11 at 08:38

GCC has a very good built-in code vectorizer (which, IIRC, kicks in at -O0 and above), so it will use SIMD wherever it can to speed up scalar code (and it will also optimize existing SIMD code a bit, where possible).

It's pretty easy to confirm whether that is indeed what's happening here: just disassemble the output (or have GCC emit commented asm files).
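For instance (a minimal sketch; the function and the exact flags are an example of mine, not taken from the answer), a simple loop like the one below is a typical candidate for auto-vectorization, and the generated assembly can be inspected with `gcc -S -fverbose-asm` or `objdump -d`:

```c
/* autovec.c - a loop GCC's auto-vectorizer can turn into packed SSE code.
 * Compile with something like:  gcc -std=c99 -O3 -S -fverbose-asm autovec.c
 * then look for packed instructions (addps/mulps) in autovec.s,
 * or run objdump -d on the compiled binary. */
void scale_add(float *restrict dst, const float *restrict src, float k, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] += k * src[i];   /* candidate for mulps/addps at -O3 */
}
```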

– Necrolis
  • I checked the assembler code and I just see the pair of addps instructions that I expected from the piece of code with explicit SSE, at least. – Genís Aug 10 '11 at 16:23
  • I doubt that automatic vectorization comes into play at -O0 (no optimization), as it's a very heavy optimization that should only kick in at -O2 or -O3. – Christian Rau Nov 10 '11 at 14:30
  • If you look at the gcc man page, it says that `-ftree-vectorize` is set by `-O3`. That's on Debian/Ubuntu; it might be different on other platforms. Careful, `-O0` means no optimization. Optimization starts at `-O1`. – Hans-Christoph Steiner Dec 21 '12 at 16:18