My question is the following:
I have to perform operations on numbers in the range [0, 1]. Now suppose I'm in the following situation:
double[] a = new double[100];
// initialize a with random numbers
double b = 1;
for(int i=0; i<100;i++){
b *= a[i];
}
Would it be better to do something like this instead:
double[] a = new double[100];
// initialize a with random numbers
double[] A = new double[100];
for(int i=0; i<100; i++){
A[i] = Math.log(a[i]);
}
double b = 0;
for(int i=0; i<100; i++){
b += A[i];
}
b = Math.exp(b);
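To make the precision question concrete, here is a small runnable sketch (class and method names are mine, just for illustration) showing where the plain product actually breaks down: with enough small factors it underflows to 0.0, while the sum of logs stays perfectly representable.

```java
public class LogProduct {
    // Direct product of values in (0,1]: underflows to 0 once the
    // true result drops below double's smallest subnormal (~4.9e-324).
    static double productDirect(double[] a) {
        double b = 1.0;
        for (double x : a) b *= x;
        return b;
    }

    // Log-domain version: sum the logs instead of multiplying.
    // Returns log(product), which stays in a comfortable range
    // even when the product itself is not representable.
    static double logProduct(double[] a) {
        double s = 0.0;
        for (double x : a) s += Math.log(x);
        return s; // Math.exp(s) recovers the product when representable
    }

    public static void main(String[] args) {
        double[] a = new double[400];
        java.util.Arrays.fill(a, 0.1);   // true product is 1e-400
        System.out.println(productDirect(a)); // 0.0 (underflow)
        System.out.println(logProduct(a));    // ~ -921.03 (= 400 * ln 0.1)
    }
}
```

So the log trick is not about extra digits of precision; it trades the product's huge dynamic range for a sum with a small one, at the cost of one `log` per factor and a final `exp`.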
The second proposal is contrived; it's just to highlight the problem. Is there a "right way" to multiply values that always lie between 0 and 1? As Medo42 pointed out, I think the precision given by double would be good enough to work in the normal way, at least in my application (even if I'm not sure, because in Q-Learning, for example, we compute something that is essentially S = X1 * Alpha + X2 * Alpha^2 + X3 * Alpha^3 + X4 * Alpha^4 + ..., where Alpha is a number between 0 and 1).
But in other settings (like setting the weights of an ANN with a weight decay method, where the weights should be as small as possible) or in a generic framework that uses a Gradient Descent method, it might matter.
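For what it's worth, a discounted sum like the one above can be evaluated without ever forming Alpha^k explicitly, which avoids repeatedly multiplying tiny powers. A minimal sketch using Horner's scheme (`discountedSum` is a hypothetical helper, not from any framework):

```java
public class DiscountedSum {
    // Evaluates S = x[0]*alpha + x[1]*alpha^2 + ... + x[n-1]*alpha^n
    // via Horner's scheme: S = alpha*(x[0] + alpha*(x[1] + ...)).
    // Each loop iteration peels off exactly one factor of alpha,
    // so no explicit power alpha^k is ever computed.
    static double discountedSum(double[] x, double alpha) {
        double s = 0.0;
        for (int i = x.length - 1; i >= 0; i--) {
            s = alpha * (x[i] + s);
        }
        return s;
    }

    public static void main(String[] args) {
        double[] x = {1.0, 1.0};
        // 1.0*0.5 + 1.0*0.25 = 0.75
        System.out.println(discountedSum(x, 0.5));
    }
}
```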
So, at least out of personal curiosity, I would like to know whether there is a better way to perform this kind of computation.
Thanks to all, and I apologize for having written this question in an unclear way before; I hope it is clearer now.