You certainly can do that. The C and C++ languages allow you to do it, and it will most likely do what you want.
However, the fact that you're using AVX means you care about performance. So it's worth knowing that this is one of the most common performance traps that SSE programmers fall into — and many never notice.
Problem 1:
Current compilers implement such a union through a memory location. So that's the first problem: every time you access the union through a different field, the compiler forces the data out to memory and reads it back. That's one slow-down.
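For concreteness, the union in question presumably looks something like this (the type and field names here are assumptions, inferred from the code below):

```cpp
#include <immintrin.h>

// Assumed shape of the union: one 256-bit AVX field overlaid
// with an array of two 128-bit SSE fields.
union eight_floats {
    __m256 a;     // full 256-bit vector
    __m128 b[2];  // the same bits, viewed as two 128-bit halves
};

// The two views occupy the same 32 bytes.
static_assert(sizeof(eight_floats) == sizeof(__m256),
              "union is exactly one 256-bit vector wide");
```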
Here's what MSVC2010 generates (with optimizations on) for this source:
eight_floats a;
a.a = vecA[0];
__m128 fvecA = a.b[0];
__m128 fvecB = a.b[1];
fvecA = _mm_add_ps(fvecA,fvecB);
The resulting assembly:
vmovaps YMMWORD PTR a$[rbp], ymm0
movaps xmm1, XMMWORD PTR a$[rbp+16]
addps xmm1, XMMWORD PTR a$[rbp]
movaps XMMWORD PTR fvecA$[rbp], xmm1
movss xmm1, DWORD PTR fvecA$[rbp]
You can see that it's being flushed to memory.
Problem 2:
The second slow-down is even worse. When you write something to memory and immediately read it back at a different word size, you will likely trigger a store-to-load stall (typically on the order of 10+ cycles).
This is because the load-store queues on current processors aren't designed to handle this unusual situation, so they deal with it by simply flushing the pending stores out to memory before the load can proceed.
The "correct" way to access the lower and upper half of AVX datatypes is to use:
_mm256_extractf128_ps()
_mm256_insertf128_ps()
_mm256_castps256_ps128()
and family. The other AVX datatypes have analogous intrinsics.
That said, a compiler may be smart enough to recognize what you are doing and emit those instructions anyway. (MSVC2010, at least, doesn't.)