I have an image represented by a std::complex<double>* and I want to convert it to a double* of twice the length (array length, not byte size). The complex image comes from some earlier FFT operations. I need to do some pixel-wise additions on images, so I think representing the data as a double* would be more convenient. In case you wonder why I have to do this: I want to use CUDA for the additions, and complex addition in kernel code doesn't support atomic operations (or is there one that you know of? :>), so adding the real and imaginary parts as plain doubles will hopefully give the same result.
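To make it concrete, this is roughly the kind of kernel I have in mind (just a sketch; the kernel name accumulatePixels and the parameter names are made up for illustration, and double-precision atomicAdd needs compute capability 6.0 or later):
__global__ void accumulatePixels(double* dst, const double* src, int numComplexPixels)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < numComplexPixels) {
        // each complex pixel is treated as two consecutive doubles: real, then imaginary
        atomicAdd(&dst[2 * i], src[2 * i]);
        atomicAdd(&dst[2 * i + 1], src[2 * i + 1]);
    }
}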
I have tried comparing the memory layouts of std::complex<double> and double. For example, using the function for printing the binary representation of a variable provided by this link, I print two numbers:
double a1 = 1.0;
complex<double> a2(1.0, 1.0);
printBits(sizeof(double), &a1);
printBits(sizeof(complex<double>), &a2);
which gives
0011111111110000000000000000000000000000000000000000000000000000
00111111111100000000000000000000000000000000000000000000000000000011111111110000000000000000000000000000000000000000000000000000
So the second one is exactly two copies of the first one, which suggests to me that std::complex<double> is simply two doubles laid out back to back.
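As another quick sanity check (a small host-side sketch I wrote just for this question), copying the bytes of a complex<double> into a double[2] gives me the real part first and the imaginary part second:
#include <complex>
#include <cstring>
#include <iostream>

int main()
{
    std::complex<double> z(1.0, 2.0);
    double parts[2];
    std::memcpy(parts, &z, sizeof(z));                 // copy the raw bytes of the complex number
    std::cout << parts[0] << " " << parts[1] << "\n";  // prints "1 2" here
}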
So for my image, I used the following two versions of the conversion (suppose my image is stored in a std::complex<double> array called image):
// C type cast
double* C_image = (double*)image;
// Reinterpret cast
double* reinterpret_image = reinterpret_cast<double*>(image);
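Here is the kind of check I ran to compare the two views against real()/imag() (a sketch; the function name checkConversion is just for illustration and numPixels is the number of complex pixels in the image):
#include <cassert>
#include <complex>

void checkConversion(std::complex<double>* image, int numPixels)
{
    double* C_image = (double*)image;
    double* reinterpret_image = reinterpret_cast<double*>(image);
    for (int i = 0; i < numPixels; ++i) {
        // both views should see pixel i as { real, imag } at indices 2*i and 2*i + 1
        assert(C_image[2 * i] == image[i].real());
        assert(C_image[2 * i + 1] == image[i].imag());
        assert(reinterpret_image[2 * i] == image[i].real());
        assert(reinterpret_image[2 * i + 1] == image[i].imag());
    }
}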
Both results seem to be correct. Since I am doing a lot of these conversions in my image-processing code, I really want to make sure the conversion is well supported "in theory". So my questions are: are the above conversions valid, or are they just hacky and likely to cause bugs in some situations? And which conversion is better, in terms of both performance (if any) and robustness?