I am trying to use quad precision in gfortran, but it seems like real*16 does not work. After some fishing around, I have found that it may be implemented as real*10. Is real*10 actually quad precision?
How can I test the precision of my code? Is there a standard, simple algorithm for testing precision? For example, to find what the machine treats as zero, I repeatedly divide by 2.0 until I reach 0.0; tracking the values tells me when the computer 'thinks' my nonzero number is zero, giving me computer zero.
Is there a good way of figuring out the precision with an algorithm like the one I described?