2

Is 1e-10 small enough to be considered zero, or is it 1e-16, or some other value?

Or does it depend on the software package, or the application? Is there a general limit in numerical resolution below which there is no point in considering the output any more informative than zero?
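
(For concreteness, the double-precision machine epsilon, i.e. the spacing around 1.0, can be queried in any of the packages below; in NumPy, for instance:)

import numpy as np

print(np.finfo(float).eps)    # 2.220446049250313e-16, gap between 1.0 and the next representable double
print(np.spacing(1.0))        # same quantity, queried per value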


EDIT:

Here is the source of the question: I posted this on MathSE, and what was supposed to be a "cheap computational proof" of the null spaces came out like this:

In Matlab:

A = [1 3; 1 2; 1 -1; 2 1];
rank(A);
[U,S,V] = svd(A);
left_null_A = transpose(U);
rows = (rank(A) + 1): size(left_null_A,1);
left_null_A = left_null_A(rows,:)
(left_null_A(1,:) + left_null_A(2,:)) * A
%ans =

%  -7.2164e-016  5.5511e-017

and

B = transpose(A);
rank(B);
[U,S,V] = svd(B);
right_null_B = transpose(V);
rows = (rank(B) + 1): size(right_null_B,1);
right_null_B(rows,:)
right_null_B = transpose(right_null_B(rows,:))
B * (right_null_B(:,1) + right_null_B(:,2))
%ans =

%  -7.7716e-016
%   2.7756e-016

In Python:

import numpy as np
A = np.matrix([[1,3], [1,2], [1, -1], [2,1]])
rank = np.linalg.matrix_rank(A)
U, s, V = np.linalg.svd(A, full_matrices = True)
t_U_A = np.transpose(U)
nrow = t_U_A.shape[0]
left_null_A = t_U_A[rank:nrow,:]
np.dot((left_null_A[0,:] + left_null_A[1,:]), A)
# Out[2]: matrix([[ -4.44089210e-16,  -2.10942375e-15]])

and

B = np.transpose(A)
rank = np.linalg.matrix_rank(B)
U, s, V = np.linalg.svd(B, full_matrices = True)
t_V_B = np.transpose(V)
ncols = t_V_B.shape[1]
right_null_B = t_V_B[:,rank:ncols]    
np.dot(B, (right_null_B[:,0] + right_null_B[:,1]))
# Out[3]: 
# matrix([[ -2.77555756e-16],
#         [ -1.38777878e-15]])

In R:

A = matrix(c(1,1,1,2,3,2,-1,1), ncol = 2)
r = qr(A)$rank            # Rank 2
SVD.A = svd(A, nu = nrow(A)) 
SVD.A$u    
t.U.A = t(SVD.A$u)
(left_null = t.U.A[(r + 1):nrow(t.U.A),])
colSums(left_null) %*% A
#             [,1]         [,2]
#[1,] 1.110223e-16 1.054712e-15

and

B = t(A)
r = qr(B)$rank  # Naturally it will also have rank 2.
SVD.B = svd(B, nv = ncol(B)) 
SVD.B$v    
(right_null = SVD.B$v[ ,(r + 1):ncol(B)])
B %*% rowSums(right_null)
#             [,1]
#[1,] 1.110223e-16
#[2,] 1.665335e-16

Very small numbers, indeed, but can I say they are 0 without resorting to the mathematics behind the calculations?
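
(To be explicit about the kind of check I have in mind, something like the NumPy snippet below; the choice of tolerance, scaled by machine epsilon and the largest singular value, is my own guess rather than anything taken from the documentation:)

import numpy as np

A = np.array([[1, 3], [1, 2], [1, -1], [2, 1]], dtype=float)
U, s, Vt = np.linalg.svd(A, full_matrices=True)
left_null = U[:, 2:].T                         # rows spanning the left null space (rank(A) = 2)

residual = (left_null[0, :] + left_null[1, :]) @ A

# Tolerance scaled by the matrix size, machine epsilon, and the largest singular value.
tol = max(A.shape) * np.finfo(float).eps * s[0]
print(np.all(np.abs(residual) < tol))          # True: the residual is below the tolerance, i.e. numerically zero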

Antoni Parellada
    epsilon varies depending on the value [What is the most effective way for float and double comparison?](http://stackoverflow.com/q/17333/995714). You shouldn't use a fixed epsilon http://floating-point-gui.de/errors/comparison/ – phuclv May 12 '17 at 03:35

2 Answers

2

At some point you need to treat 10^-X as zero. Something like this:

 if (value == 0.0)

is going to cause nothing but trouble. For any kind of floating-point comparison you need to define some delta.
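
A minimal sketch of that idea in Python (the delta values are placeholders you would tune per application, not recommendations):

import math

def nearly_zero(value, delta=1e-9):
    # Treat anything within +/- delta of zero as zero; delta is application-specific.
    return abs(value) < delta

def nearly_equal(a, b, rel_tol=1e-9, abs_tol=1e-12):
    # Combined relative/absolute comparison; math.isclose implements this pattern.
    return math.isclose(a, b, rel_tol=rel_tol, abs_tol=abs_tol)

print(nearly_zero(-7.2164e-16))        # True with delta = 1e-9
print(nearly_equal(0.1 + 0.2, 0.3))    # True, even though 0.1 + 0.2 != 0.3 exactly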

The size of that delta depends on the application. If you are doing molecular modeling, the mass of an electron is about 10^-30 kg. In that case your delta needs to be much smaller than 10^-30.

In CAD applications, the delta is usually configurable on a per-file basis. If you are modeling a building, measured in meters, the delta might be 0.0001 (1/10 mm). Using the same application to design a toy, also in meters, the delta might be 0.000001.

Note that if you make the delta too large in a CAD file, details get obliterated. If you make it too small, points that should be coincident are not.

Keep in mind that the IEEE double format supports exponents down to 2^-1022, i.e. normal values as small as about 2.2e-308. So even with a lot of rounding and error propagation you can make your delta quite small.
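
Those limits are easy to check; for example, with NumPy:

import numpy as np

print(np.finfo(np.float64).tiny)    # 2.2250738585072014e-308, smallest normal double (2**-1022)
print(np.nextafter(0.0, 1.0))       # 5e-324, smallest positive subnormal double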

user3344003
1

If your question is about a first-order optimality measure (i.e. how close first-order derivatives are to zero), I would say that the answer largely depends on the scale you are working with. Assume you are working with an exogenous_gdp as a constraint, whose numerical value can be around 1.e12, and that solving the problem numerically consists of making an endogenizing system drive its output, say endogenous_gdp, to equal exogenous_gdp, as follows

x = -1. + endogenous_gdp/exogenous_gdp

Say that from this relative (ratio) equalization one gets x=1.e-10. In this case 1.e-10 is not so good, since it means that endogenous_gdp exceeds exogenous_gdp by a relative factor of 1.e-10. And since exogenous_gdp equals 1.e12, your non-relative (difference) spread will be 1.e-10*1.e12=100. You could instead have worked directly in non-relative terms, as

x = endogenous_gdp - exogenous_gdp

And this time x=1.e-10, in non-relative terms, can be considered not so bad, since the relative spread is 1.e-10/1.e12=1.e-22.

Conversely, in the same manner, if we were working with values around 1.e-8, the resulting non-relative and relative differences would respectively be 1.e-10*1.e-8=1.e-18 and 1.e-10/1.e-8=1%. This time it is the latter that is not so good.
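
A small sketch of that arithmetic in Python (the GDP figures are just the illustrative magnitudes used above):

exogenous_gdp = 1.e12
endogenous_gdp = exogenous_gdp * (1. + 1.e-10)    # off by a relative factor of 1.e-10

relative_spread = -1. + endogenous_gdp / exogenous_gdp
absolute_spread = endogenous_gdp - exogenous_gdp

print(relative_spread)    # ~1.e-10: looks tiny...
print(absolute_spread)    # ~100.0: ...but is 100 units in absolute terms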

Finally on this point, you will have to consider whether you need to rescale the problem. That being said, note that, e.g., Excel treats "small" numbers as 0 below ~1e-308, which is simply the IEEE 754 double-precision limit.

If your question is about convergence, then know that, e.g., Python's scipy.optimize fsolve function uses 1.49012e-08 as its default convergence tolerance. Reusing the previous example, this would consist of computing (in relative terms), at the nth iteration,

x(n) = -1. + endogenous_gdp(n)/exogenous_gdp

then, at the (n+1)th iteration,

x(n+1) = -1. + endogenous_gdp(n+1)/exogenous_gdp

to finally compute the relative error as

y = abs( -1. + x(n+1)/x(n) )

The iteration is considered to have converged if y is lower than 1.49012e-08.
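
A minimal sketch of that stopping rule in Python (the iterate values are made up; only the convergence test mirrors the description above):

TOL = 1.49012e-08    # scipy.optimize.fsolve's default xtol

def has_converged(x_prev, x_next, tol=TOL):
    # Relative change between two successive iterates.
    return abs(-1. + x_next / x_prev) < tol

# Toy iterates of the relative GDP gap from above.
x_n, x_np1 = 3.141592653e-6, 3.141592660e-6
print(has_converged(x_n, x_np1))    # True: the relative change is ~2.2e-9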

keepAlive