
When I call tf.mean in TensorFlow.js, I get an inaccurate result. The result is only slightly off, so I suspect some kind of rounding issue. For example, for an array containing the integers from 0 through 100,000,000, the mean comes out to 50,000,040 instead of 50,000,000.

I made sure to set the data type to float32 when creating the tensor. I also implemented the equivalent code in Python TensorFlow using reduce_mean, which gave the correct result.
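
As a quick check of the rounding theory in plain Node (no tfjs involved): Math.fround rounds a number to the nearest float32 value, and integers above 2^24 = 16,777,216 are no longer all exactly representable:

console.log(Math.fround(16777216)); // 16777216 (2^24 is exactly representable)
console.log(Math.fround(16777217)); // 16777216 (2^24 + 1 rounds down)
console.log(Math.fround(99999999)); // 100000000 (rounds up; float32 spacing near 1e8 is 8)

Here is the tfjs code that produces the wrong mean: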

const tf = require('@tensorflow/tfjs');
require('@tensorflow/tfjs-node'); // registers the native Node backend

// Build an array of the integers 0 through 100,000,000
const dataset = [];
for (let i = 0; i <= 100000000; i++) {
    dataset.push(i);
}

// Create a float32 tensor and compute its mean
let tArr = tf.tensor1d(dataset, 'float32');
let tAvg = tArr.mean();
let avg = tAvg.dataSync()[0];
console.log(avg); // 50000040

The average should be 50,000,000, but instead I get 50,000,040.
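
For what it's worth, computing the same mean with ordinary JavaScript numbers gives the expected result. JS numbers are float64 doubles, and every partial sum in this loop is an integer below 2^53, so the arithmetic stays exact:

// Sanity check: the same mean with plain float64 arithmetic
let sum = 0;
for (let i = 0; i <= 100000000; i++) {
    sum += i;
}
console.log(sum / 100000001); // 50000000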

Brett L
  • Possible duplicate of [Tensorflow vs Tensorflow JS different results for floating point arithmetic computations](https://stackoverflow.com/questions/56649680/tensorflow-vs-tensorflow-js-different-results-for-floating-point-arithmetic-comp) and of [How can tensorflow do basic math with integer data larger than 32 bits?](https://stackoverflow.com/questions/57334172/how-can-tensorflow-do-basic-math-with-integer-data-larger-than-32-bits/57425824#57425824) – edkeveked Oct 27 '19 at 17:48
