
My question is the following:

I have to perform operations with numbers in the range [0,1]. Now suppose that I'm in the following situation:

double[] a = new double[100];
// initialize a with random numbers in (0, 1]
double b = 1;
for (int i = 0; i < 100; i++) {
    b *= a[i];
}

Would it be better to do something like this instead:

double[] a = new double[100];
// initialize a with random numbers in (0, 1]
double[] A = new double[100];
for (int i = 0; i < 100; i++) {
    A[i] = Math.Log(a[i]);
}
double b = 0;
for (int i = 0; i < 100; i++) {
    b += A[i];
}
b = Math.Exp(b);
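
To see where the two approaches actually diverge, here is a minimal sketch of mine (not from the original post; the 10,000-element length is an arbitrary choice to force underflow). With enough factors in (0,1) the direct product underflows to zero, while the sum of logs stays usable, e.g. for comparing products:

// Sketch: with many factors in (0,1) the direct product underflows,
// but the sum of logs remains a meaningful quantity.
var rng = new Random(42);
int n = 10000; // arbitrary, large enough to force underflow
double[] a = new double[n];
for (int i = 0; i < n; i++) {
    a[i] = rng.NextDouble(); // in [0,1); an exact 0 would give log = -Infinity
}

double direct = 1;
double logSum = 0;
for (int i = 0; i < n; i++) {
    direct *= a[i];
    logSum += Math.Log(a[i]);
}

Console.WriteLine(direct);           // 0 (underflowed)
Console.WriteLine(logSum);           // about -10000: still meaningful
Console.WriteLine(Math.Exp(logSum)); // 0 again: only the log form survives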

The second proposal is silly; it's just there to highlight the problem. Is there a "right way" to multiply values that always lie between 0 and 1? As Medo42 pointed out, I think the precision given by double would be good enough to work in the normal way, at least in my application (even if I'm not sure, because in Q-learning, for example, we tend to compute something that is, after all, similar to S = X1 * Alpha + X2 * Alpha^2 + X3 * Alpha^3 + X4 * Alpha^4 + ..., where Alpha is a number between 0 and 1).
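
As an illustration of that discounted sum, here is a small sketch (the values of X and Alpha are made up; only the shape of the computation matters). Each term shrinks geometrically, so once Alpha^i becomes negligible relative to the running sum, further terms no longer change it:

// Sketch of S = X1*Alpha + X2*Alpha^2 + ... with placeholder values.
double[] X = { 0.3, 0.7, 0.1, 0.9 }; // example values
double Alpha = 0.9;                  // example discount factor in (0,1)

double S = 0;
double pow = 1;
for (int i = 0; i < X.Length; i++) {
    pow *= Alpha;     // pow is now Alpha^(i+1)
    S += X[i] * pow;  // add term X(i+1) * Alpha^(i+1) of the 1-based formula
}
Console.WriteLine(S);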

But in some other applications (like setting the weights in an ANN with a weight-decay method, where the weights should be as small as possible), or in a generic framework where a gradient descent method is used...

So, at least out of personal curiosity, I would like to know whether there is a better way to perform this kind of computation.

Thanks to all, and I apologize for having written this question in an unclear way before; I hope it is clearer now.

Sam
    For precision, use `decimal`. – Patrick Hofman Feb 06 '15 at 14:38
  • It's hard to understand what you are actually asking. Can you try to update your question and make it clear, maybe add an example or some code? – Christoph Fink Feb 06 '15 at 14:41
  • I agree with ChrFin. I don't know if this post is about numerical precision or a way to outwardly represent some number in any range, though it could be internally normalized to a range of 0-1. – Rick Davin Feb 06 '15 at 14:42
  • One suggestion I can offer is that, if you need to add many (*many*) floating point numbers together to get a final result, you might want to sort them first and start off with the smallest ones - that way you minimize rounding errors. Not sure if a similar rule can be given for multiplications.. – Medo42 Feb 09 '15 at 00:57
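
As an aside, here is a minimal sketch of the sorted-summation idea from the comment above (mine, with arbitrary random inputs; not code from the post):

// Sketch: summing positive doubles smallest-first reduces rounding error,
// because small terms are not repeatedly absorbed by a large running total.
var rng = new Random(1);
double[] values = new double[100000]; // arbitrary size
for (int i = 0; i < values.Length; i++) {
    values[i] = rng.NextDouble();
}
Array.Sort(values); // ascending: start with the smallest values

double sum = 0;
foreach (double v in values) {
    sum += v;
}
Console.WriteLine(sum);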

1 Answer


Proportional scaling of your values should have no significant impact on your accuracy.

You can think of a float or double value as a number written with a fixed number of significant digits, only in binary instead of base 10. Just as it makes no difference to (relative) accuracy whether you write 1.234 (= 1.234e0) or 12340 (= 1.234e4), it also makes no difference to relative accuracy whether you scale your float or double values.

In other words, using a range of 0-1 is perfectly fine and you don't need to be concerned that you "lose" relative accuracy by not making use of the vast range of numbers above 1. As long as your numbers don't get REALLY close to 0 (as in, 1e-300 when using a double), you don't gain any accuracy by linearly scaling them larger.
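
To illustrate the scale-invariance argument with a sketch of my own (not part of the original answer): scaling every factor by a power of two changes only the exponents, so after undoing the scaling the product is bit-for-bit identical, as long as nothing overflows or falls into the subnormal range (which holds here):

// Sketch: power-of-two scaling is exact, so it cannot change relative accuracy.
var rng = new Random(7);
double[] a = new double[100];
for (int i = 0; i < a.Length; i++) {
    a[i] = rng.NextDouble();
}

double p1 = 1, p2 = 1, scale = 1;
foreach (double x in a) {
    p1 *= x;
    p2 *= x * 16; // multiplying by 2^4 only shifts the exponent
    scale *= 16;  // total scale factor: 16^100 = 2^400, still finite
}

Console.WriteLine(p1 == p2 / scale); // True: the bits are identical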

Medo42
  • You're wrong. Floating points have the form a^b (both are signed), which results in the range [1, Double.Max] U [Double.Min, -1] having the same number of discrete values as [-1, 1] – AK_ Feb 07 '15 at 16:52
  • It's actually `a*2^b`, not `a^b`. You're right that the ranges you mention have roughly the same number of values. I fail to see how that contradicts my answer. – Medo42 Feb 09 '15 at 00:17
  • A double can represent any value in the "normal" double range (roughly 10^-307 to 10^308) with relative error of less than 0.000000000000012%. Values with a high mantissa have around twice that resolution, but using the range [0,1] already makes full use of that fact. – Medo42 Feb 09 '15 at 00:36