Context:
In Octave, I have written code for a sigmoid function that returns values between 0 and 1. In an ideal world it would return 0 only for -Inf and 1 only for +Inf, but due to floating-point imprecision, outputs that are extremely close to those limits get rounded to them.
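For reference, here are the double-precision limits involved (I am assuming standard IEEE 754 doubles, which Octave uses by default; the printed digits reflect the default short format):
>> eps       % spacing between 1 and the next larger double
ans = 2.2204e-16
>> realmin   % smallest normalised positive double
ans = 2.2251e-308
>> realmax   % largest finite double
ans = 1.7977e+308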
The Question:
My question is why the following occurs: the boundary at which the output rounds to 0 is clearly different from the boundary at which it rounds to 1:
>> sigmoid(-709)
ans = 1.2168e-308
>> sigmoid(-710)
ans = 0
>> sigmoid(36)
ans = 1.00000
>> sigmoid(37)
ans = 1
>> (sigmoid(37)-1)==0
ans = 1
>> (sigmoid(36)-1)==0
ans = 0
>> sigmoid(-710)==0
ans = 1
>> sigmoid(-709)==0
ans = 0
In the example, one can see that the input needed to round the output to 1 is MUCH smaller in magnitude than the input needed to round it to 0. 37 versus -710 is a very large discrepancy, considering I would have expected the two boundaries to have the same magnitude and opposite signs.
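For concreteness, here are the raw exp values at those four inputs (again assuming IEEE 754 doubles, so the exact printed digits may vary slightly):
>> exp(-37)    % below eps/2 = 1.1102e-16
ans = 8.5330e-17
>> exp(-36)    % above eps/2
ans = 2.3195e-16
>> exp(709)    % still finite, just under realmax
ans = 8.2184e+307
>> exp(710)    % overflows to Inf
ans = Inf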
My Code:
Perhaps it's an issue with my function:
function [z] = sigmoid(x)
  % Logistic sigmoid, applied elementwise: maps the real line to (0, 1)
  z = 1.0 ./ (1.0 + exp(-x));
endfunction
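To narrow down where the rounding happens, the denominator can also be evaluated on its own (a minimal check using the same formula):
>> 1.0 + exp(-37)    % the added term is lost entirely
ans = 1
>> 1.0 + exp(-36)    % the added term survives as one ulp above 1
ans = 1.00000
>> 1.0 + exp(710)    % the denominator itself overflows
ans = Inf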
What I've Tried:
Another point: I changed the function to add 1 to the result (essentially translating the graph up by 1, as sketched below), and the boundaries became +37 and -37 for rounding to 2 and to 1 respectively. This makes me think it really is something about 0 in particular, and not just about the function and its lower bound.
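The shifted version was essentially the following (reconstructed here; the name shifted_sigmoid is just for illustration):
function [z] = shifted_sigmoid(x)
  % The same sigmoid translated up by 1, so the range is (1, 2)
  z = 1.0 + 1.0 ./ (1.0 + exp(-x));
endfunction

>> shifted_sigmoid(37) == 2
ans = 1
>> shifted_sigmoid(-37) == 1
ans = 1
>> shifted_sigmoid(-36) == 1
ans = 0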
If it's something to do with my computer, then what would cause such a thing?