I'm thinking about writing a SIMD vector math library, so as a quick benchmark I wrote a program that does 100 million element-wise multiplications of 4-float vectors and adds the results to a cumulative total. For my classic, non-SIMD variation I just made a struct with 4 floats and wrote my own multiply function "multiplyTwo" that multiplies two such structs element-wise and returns another struct. For my SIMD variation I used "immintrin.h" along with __m128, _mm_set_ps, and _mm_mul_ps. I'm running on an i7-8565U processor (Whiskey Lake) and compiling with:

g++ main.cpp -mavx -o test.exe

to enable the AVX extension instructions in GCC.
The weird thing is that the SIMD version takes about 1.4 seconds, while the non-SIMD version takes only 1 second. I feel as though I'm doing something wrong, as I thought the SIMD version should run roughly 4 times faster. Any help is appreciated; the code is below. I've placed the non-SIMD code in comments; the code in its current form is the SIMD version.
#include "immintrin.h" // for AVX
#include <iostream>

struct NonSIMDVec {
    float x, y, z, w;
};

NonSIMDVec multiplyTwo(const NonSIMDVec& a, const NonSIMDVec& b);

int main() {
    union { __m128 result; float res[4]; };
    // union { NonSIMDVec result; float res[4]; };
    float total = 0;
    for (unsigned i = 0; i < 100000000; ++i) {
        __m128 a4 = _mm_set_ps(0.0000002f, 1.23f, 2.0f, (float)i);
        __m128 b4 = _mm_set_ps((float)i, 1.3f, 2.0f, 0.000001f);
        // NonSIMDVec a4 = {0.0000002f, 1.23f, 2.0f, (float)i};
        // NonSIMDVec b4 = {(float)i, 1.3f, 2.0f, 0.000001f};
        result = _mm_mul_ps(a4, b4);
        // result = multiplyTwo(a4, b4);
        total += res[0];
        total += res[1];
        total += res[2];
        total += res[3];
    }
    std::cout << total << '\n';
}
NonSIMDVec multiplyTwo(const NonSIMDVec& a, const NonSIMDVec& b)
{ return {a.x*b.x, a.y*b.y, a.z*b.z, a.w*b.w}; }