I know that you can use:
#define _USE_MATH_DEFINES
and then:
M_PI
to get the constant pi. However, if I remember correctly (comments welcome), this is compiler/platform dependent. So, what would be the most reliable way to use a pi constant that won't cause any problems when I port the code from Linux to other systems?
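To make it concrete, this is roughly how I'm using it right now (a minimal sketch; the detail that the define has to come before the math header include is my understanding of the MSVC behaviour, corrections welcome):

#define _USE_MATH_DEFINES  // apparently must appear before <cmath>/<math.h> on MSVC
#include <cmath>
#include <cstdio>

int main() {
    // M_PI is a double provided by the math header when the macro above is honoured
    std::printf("pi = %.17g\n", M_PI);
    return 0;
}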
I know that I could just define a float/double myself and set it to a rounded value of pi (something like the snippet below), but I'd really like to know whether there is a designated mechanism for this.
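For illustration, the kind of hand-rolled fallback I mean (the name PI is just my own choice for this sketch):

const double PI = 3.14159265358979323846;  // hand-written approximation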