
What motivates several implementations of the "same" underlying data primitives for floats, ints, doubles, bools, ... amongst TensorFlow, NumPy, PyTorch, JAX, ..., and Python itself? Furthermore, if tf.float64 is purely a wrapper around the NumPy equivalent, why do we have another wrapper under tf.experimental.numpy.float64 that's presumably going to be promoted in one of the next releases?

I can't very quickly produce an example, but I can recall from my deep learning course that friends and I would basically take turns having our nights ruined by issues with mixing them: random overflows and inaccuracies here and there that were eventually (at least seemingly) mended by swapping out np.float64 for tf.float64.
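A hypothetical reconstruction of the kind of surprise I mean (not the original bug, just a sketch of how a narrower dtype can silently blow up where a wider one would not):

```python
import numpy as np

# float32 has a max of ~3.4e38, so an intermediate product can
# overflow to inf where float64 (or a Python float) is fine.
x32 = np.float32(1e20)
print(x32 * x32)   # inf, with an overflow RuntimeWarning

x64 = np.float64(1e20)
print(x64 * x64)   # 1e+40, no problem
```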

Should I expect more performance out of one over another? Are these meant to avoid wasteful allocations, e.g. the overhead of storing Python float objects inside an np.ndarray, and so on?
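To make the allocation question concrete, here is a small sketch of what I understand the difference to be (exact object sizes are CPython implementation details and may vary):

```python
import sys
import numpy as np

# A Python float is a full heap-allocated object; a list of floats
# stores a pointer to each such object.
print(sys.getsizeof(1.0))   # object header + 8-byte payload (~24 bytes on 64-bit CPython)

# An ndarray of dtype float64 stores raw 8-byte values in one
# contiguous buffer, with no per-element Python object.
a = np.zeros(1000, dtype=np.float64)
print(a.itemsize)           # 8 bytes per element
print(a.nbytes)             # 8000 bytes for the whole buffer
```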

mmmeeeker
  • At least with `numpy` you (almost) never need to create a `np.float64` object directly. You may get one by indexing a `numpy` array. But even then you don't need to worry about it being different from a Python float. I haven't installed `tensorflow` (too small a computer). – hpaulj Jul 17 '22 at 04:52
  • 1
    What you call 'implementations' are classes. A class not only has (a) data values, but also methods, many of which are inherited. `np.float64` inherits from a variety of more generic `numpy` classes, but also from base Python `float` (but `np.float32` does not). A `ndarray` does not "contain" floats (at least not in the sense of python lists). It has a databuffer - some sort of `c` array, that it can access byte by byte or in larger blocks like 8 byte doubles. Also a `np.float64` object has many (but not all) of the methods of a `ndarray`, while `float` has none. – hpaulj Jul 17 '22 at 15:36
  • So the point of the wrappers would be purely compatibility? I suppose my confusion lies somewhere in the neighborhood of "it seems reasonable to require that someone using PyTorch/TensorFlow/... work with NumPy, and generate some error otherwise". Between my recollection of those errors -- which, all things considered, may be incomplete given that it's been a minute -- and my lack of understanding of Python package management, this seems wasteful, so there must be some extra motivation. E.g., TF's experimental NumPy wrapping: a new API? Or new functionality? – mmmeeeker Jul 17 '22 at 18:38
  • Presumably to understand `tf.experimental.numpy.float64`, one has to first understand the `experimental.numpy` API, https://www.tensorflow.org/api_docs/python/tf/experimental/numpy. I don't know how tensorflow is organized. The closest I get to `tf` is helping people understand why their "ragged arrays" can't be turned into `tensors`. – hpaulj Jul 17 '22 at 20:38
  • Related: [What is the difference between math.exp and numpy.exp and why do numpy creators choose to introduce exp again?](https://stackoverflow.com/q/30712402/4518341), [What is the difference between import numpy and import math](https://stackoverflow.com/q/41648058/4518341) – wjandrea Feb 26 '23 at 17:05
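hpaulj's point about inheritance can be checked directly. A small sketch (behavior as in current NumPy, where `np.float64` subclasses Python `float` but `np.float32` does not):

```python
import numpy as np

# np.float64 passes isinstance checks against Python float;
# np.float32 does not -- one way mixed-dtype code can surprise you.
print(isinstance(np.float64(1.0), float))   # True
print(isinstance(np.float32(1.0), float))   # False

# NumPy scalars also carry ndarray-like attributes that plain floats lack.
print(np.float64(1.0).itemsize)             # 8
print(hasattr(1.0, "itemsize"))             # False
```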

0 Answers