
Can I have a union like this

  union eight_floats_t
  {
    __m256 a;
    __m128 b[2];
  };
  eight_floats_t eight_floats;

to get instant access to the two 128-bit parts of a 256-bit register?

Edit: I was asking to understand the performance impact of this approach.

    You certainly can. But if the compiler doesn't know how to optimize it, you will pay a performance penalty. – Mysticial Nov 01 '12 at 18:26

2 Answers


You certainly can do that. The C and C++ languages allow you to do it, and it will most likely do what you want it to do.

However, the fact that you're using AVX means you care about performance. So it might be useful to know that this is one of the most common (performance) traps that SSE programmers fall into, often without noticing.

Problem 1:

Current compilers implement such a union using a memory location. So that's the first problem: every time you access the union through a different field, the data is forced out to memory and read back. That's one slow-down.

Here's what MSVC2010 generates (with optimizations enabled) for:

  eight_floats_t a;
  a.a = vecA[0];

  __m128 fvecA = a.b[0];
  __m128 fvecB = a.b[1];
  fvecA = _mm_add_ps(fvecA, fvecB);

  vmovaps YMMWORD PTR a$[rbp], ymm0
  movaps  xmm1, XMMWORD PTR a$[rbp+16]
  addps   xmm1, XMMWORD PTR a$[rbp]
  movaps  XMMWORD PTR fvecA$[rbp], xmm1
  movss   xmm1, DWORD PTR fvecA$[rbp]

You can see that it's being flushed to memory.

Problem 2:

The second slow-down is even worse. When you write something to memory and immediately access it with a different word size, you will likely trigger a store-to-load stall (typically on the order of 10+ cycles).

This is because the load-store queues on current processors aren't usually designed to handle this (unusual) situation. So they deal with it by simply flushing the queues to memory.


The "correct" way to access the lower and upper half of AVX datatypes is to use:

  • _mm256_extractf128_ps()
  • _mm256_insertf128_ps()
  • _mm256_castps256_ps128()

and family, and likewise for the other datatypes.

That said, it is possible that the compiler may be smart enough to recognize what you are doing and use those instructions anyway. (At least MSVC2010 doesn't.)

  • It's worth noting that this shouldn't actually take a store-forwarding stall on current µarches; the 32B-store is cracked to two 16B store µops, each of which forward without hazard to the corresponding load op. That shouldn't take anything away from your general "don't do this" message, however. – Stephen Canon Nov 01 '12 at 20:55
  • That's good to know. I wasn't aware that was the case for Intels as well. Though I'd imagine in the future, the 32-byte stores will probably become "native". – Mysticial Nov 01 '12 at 22:50
  • @Mystical: even once they're native, I expect forwarding to continue to work (Intel has actually put a good amount of effort into making forwarding work in all cases that aren't pathologically misaligned -- for example, recent µarches forward 16B stores to any smaller load that doesn't cross an 8B boundary, and to the obvious 16B load as well -- this is all documented in their Optimization Manual, by the way). – Stephen Canon Nov 01 '12 at 22:54
  • @Mysticial you said "Current compilers..." about 4 years ago. Is your answer still accurate? – J'e Nov 18 '16 at 15:19
  • @16num I haven't checked in a long time since I habitually avoid doing this. So I don't know. I imagine it's *possible* for the compiler to automatically generate the insert/extracts if the index is a compile-time constant. If not, it gets trickier. – Mysticial Nov 18 '16 at 15:42
  • 1
    @16num [I just tested this on GCC6.2 and ICC17.](https://godbolt.org/g/VmjwvE) The behavior hasn't changed at all since 4 years ago. It still goes through memory for both directions and will incur the store-to-load stall. So no, compilers haven't gotten better at this. – Mysticial Nov 18 '16 at 16:27

Yes, you can. Have you tried it?

Do be aware that the C standard says that it's unspecified behavior to access a member of a union which was not the one most recently written to -- specifically, if you write to one member and then read a different one, the other one has unspecified values (C99 §6.2.6.1/7). However, it is an extremely common idiom and is well-supported by all major compilers. As a practical matter, reading and writing to any member of a union, in any order, is acceptable practice (source).

  • Are you sure this is UB? The gcc manual actually recommends this practice for avoiding type-punned pointers – Gunther Piez Nov 01 '12 at 20:13
  • I tried it, but I wanted to understand its performance impact, as Mysticial assumed. Thanks. – Yoav Nov 01 '12 at 20:44
  • @hirschhornsalz: I took a closer look, and you're right—it's not UB. C99 §6.2.6.1/7 says "When a value is stored in a member of an object of union type, the bytes of the object representation that do not correspond to that member but do correspond to other members take unspecified values." – Adam Rosenfield Nov 02 '12 at 03:59
  • @AdamRosenfield I took a closer look too, and actually it seems it is not UB in C99 but is UB in C++11, see http://stackoverflow.com/questions/11373203/accessing-inactive-union-member-undefined – Gunther Piez Nov 02 '12 at 08:10