What motivates having several implementations of the "same" underlying data primitives (floats, ints, doubles, bools, ...) across TensorFlow, NumPy, PyTorch, JAX, ..., and Python itself? Furthermore, if tf.float64 is essentially a wrapper around the NumPy equivalent, why is there yet another alias under tf.experimental.numpy.float64 that's presumably going to be promoted out of experimental in one of the next releases?
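For concreteness, here's roughly what I see when I poke at these names (a sketch assuming NumPy plus a recent TF 2.x install; exact reprs may differ by version):

```python
import numpy as np
import tensorflow as tf

# np.float64 is a NumPy scalar type (it even subclasses Python's float)
print(np.float64, issubclass(np.float64, float))

# tf.float64 isn't a type at all; it's a DType descriptor object that tags
# tensors, and it knows which NumPy type it corresponds to
print(type(tf.float64), tf.float64.as_numpy_dtype)

# in the TF 2.x builds I've looked at, the experimental numpy API appears to
# simply re-export the NumPy scalar type rather than wrap it again
print(tf.experimental.numpy.float64 is np.float64)
```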
I can't quickly produce an example, but I recall from my deep learning course that friends and I would basically take turns having our nights ruined by issues from mixing them: random overflows and inaccuracies here and there that were eventually (at least seemingly) fixed by swapping out np.float64 for tf.float64.
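I can't reconstruct the exact bugs, but the flavor of mismatch I mean looks something like this (a sketch assuming eager TF 2.x with its default strict dtype behavior, i.e. without the newer NumPy-style type promotion turned on):

```python
import numpy as np
import tensorflow as tf

x = tf.constant([1.0, 2.0])            # TF's default float dtype is float32
y = tf.constant(np.array([1.0, 2.0]))  # NumPy defaults to float64, so this tensor is float64
print(x.dtype, y.dtype)

try:
    tf.add(x, y)                       # TF won't silently promote across widths
except (tf.errors.InvalidArgumentError, TypeError) as err:
    print("mixing dtypes:", err)

# being explicit about the dtype on the NumPy side removes the surprise
y32 = tf.constant(np.array([1.0, 2.0], dtype=np.float32))
print(tf.add(x, y32))
```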
Should I expect more performance out of one than another? Or are these meant to avoid wasteful allocations, e.g. for something like storing Python's float objects in an np.ndarray, and so on?
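To make the allocation part of the question concrete, here's the kind of difference I have in mind (my own sketch, not something either library documents as the rationale):

```python
import sys
import numpy as np

values = [float(i) for i in range(1000)]

boxed = np.array(values, dtype=object)       # 1000 pointers to separate Python float objects
packed = np.array(values, dtype=np.float64)  # 1000 raw 8-byte doubles in one contiguous buffer

print(boxed.nbytes, packed.nbytes)           # both report the same buffer size (pointers vs doubles)...
print(sys.getsizeof(values[0]))              # ...but each boxed float also costs ~24 bytes of heap on its own

# packed.sum() runs as vectorized C; boxed.sum() dispatches to Python-level
# float.__add__ element by element
print(packed.sum(), boxed.sum())
```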