I am working with small numbers in tensorflow, which sometimes results in numerical instability.
I would like to increase the precision of my results, or at the very least determine bounds on my result.
The following code shows a specific example of these numerical errors (it outputs nan instead of 0.0, because float64 is not precise enough to represent 1 + eps/2):
import numpy as np
import tensorflow as tf
# setup
eps=np.finfo(np.float64).eps
v=eps/2
x_init=np.array([v,1.0,-1.0],dtype=np.float64)
x=tf.get_variable("x", initializer=tf.constant(x_init))
square=tf.reduce_sum(x)  # 1.0+eps/2 rounds to 1.0, so this evaluates to 0.0 instead of eps/2
root=tf.sqrt(square-v)   # sqrt of a (slightly) negative number -> nan
# run
with tf.Session() as session:
    init = tf.global_variables_initializer()
    session.run(init)
    ret = session.run(root)
    print(ret)
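For reference, the same cancellation happens in plain numpy, so I believe the problem is the float64 representation itself and not something tensorflow does (under the default round-to-nearest mode, 1.0 + eps/2 rounds to exactly 1.0):
import numpy as np
eps=np.finfo(np.float64).eps
v=eps/2
print((v+1.0)-1.0)            # 0.0, the eps/2 is rounded away
print(np.sqrt((v+1.0)-1.0-v)) # nan: sqrt of -eps/2 (with a RuntimeWarning)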
I am assuming there is no way to increase the precision of values in tensorflow. But maybe it is possible to set the rounding mode, as one can in C++ with std::fesetround(FE_UPWARD)? Then I could force tensorflow to always round up, which would ensure that I am taking the square root of a non-negative number.
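To illustrate what I expect upward rounding to achieve (simulated here with np.nextafter rather than an actual change of the rounding mode): rounding v+1.0 towards +infinity gives the next representable double above 1.0, so the sum no longer cancels to 0.0 and the argument of the square root stays non-negative:
import numpy as np
eps=np.finfo(np.float64).eps
v=eps/2
one_up=np.nextafter(np.float64(1.0), np.inf)  # 1.0+eps, what v+1.0 would become under FE_UPWARD
print(np.sqrt((one_up-1.0)-v))                # sqrt(eps/2), well-defined and non-negative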
What I tried: I tried to follow this question, which outlines how to set the rounding mode for python/numpy. However, this does not seem to work, because the following code still prints nan:
import numpy as np
import tensorflow as tf
import ctypes
FE_TONEAREST = 0x0000 # these constants may be system-specific
FE_DOWNWARD = 0x0400
FE_UPWARD = 0x0800
FE_TOWARDZERO = 0x0c00
libc = ctypes.CDLL('libm.so.6') # may need 'libc.dylib' on some systems
libc.fesetround(FE_UPWARD)
# setup
eps=np.finfo(np.float64).eps
v=eps/2
x_init=np.array([v,1.0,-1.0],dtype=np.float64)
x=tf.get_variable("x", initializer=tf.constant(x_init))
square=tf.reduce_sum(x)
root=tf.sqrt(square-v)
# run
with tf.Session() as session:
    init = tf.global_variables_initializer()
    session.run(init)
    ret = session.run(root)
    print(ret)
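I am also not sure whether the fesetround call takes effect at all, or whether the tensorflow kernels simply ignore the mode. Assuming the approach from the linked question should at least affect plain numpy arithmetic in the same thread, I would expect a check like this to print eps instead of 0.0:
import numpy as np
import ctypes
FE_UPWARD = 0x0800  # may be system-specific
libc = ctypes.CDLL('libm.so.6')
libc.fesetround(FE_UPWARD)
eps=np.finfo(np.float64).eps
# if the rounding mode applies to numpy, eps/2+1.0 rounds up to 1.0+eps, so the difference is eps rather than 0.0
print((np.float64(eps/2)+np.float64(1.0))-np.float64(1.0))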