
I'm solving Laplace's equation with the Gauss-Seidel method, but in some regions it's showing a plateau-like aspect. Formally, i.e., by numerical analysis, such regions should not exist, even if the gradient is almost zero.

I'm forced to believe that double precision isn't enough to perform the arithmetic and that a big-number library needs to be used (killing performance, since the arithmetic will now be done in software). Or that I should do the operations in a different order, aiming to preserve some significance in the decimals.

Example

Cell (13,14,0) is being updated with a 7-point stencil (in 3D), and its neighbours are:

(12,14,0)=  0.9999999999999936; // (x-)
(14,14,0)=  0.9999999999999969; // (x+)
(13,13,0)=  0.9999999999999938; // (y-)
(13,15,0)=  1.0000000000000000; // (y+)
(13,14,-1)= 1.0000000000000000; // (z-)
(13,14,1)=  0.9999999999999959; // (z+)

So, the new value of cell (13,14,0) would be evaluated as:

p_new = (0.9999999999999936 + 0.9999999999999969 + 0.9999999999999938 + 1.0000000000000000 + 1.0000000000000000 + 0.9999999999999959) / 6.0 ;

which leads to p_new being 1.0000000000000000, when it should be 0.9999999999999966.

Code

#include <stdio.h>

int main()
{
    double ad_neighboor[6] = {0.9999999999999936, 0.9999999999999969,
                              0.9999999999999938, 1.0000000000000000,
                              1.0000000000000000, 0.9999999999999959};

    double d_denom = 6.0;

    unsigned int i_xBackward=0;
    unsigned int i_xForward=1;

    unsigned int i_yBackward=2;
    unsigned int i_yForward=3;

    unsigned int i_zBackward=4;
    unsigned int i_zForward=5;

    double d_newPotential = (ad_neighboor[i_xForward] + ad_neighboor[i_xBackward] +
                             ad_neighboor[i_yForward] + ad_neighboor[i_yBackward] +
                             ad_neighboor[i_zForward] + ad_neighboor[i_zBackward] ) / d_denom;

    printf("%.16f\n", d_newPotential);
}
  • Maybe [this](https://stackoverflow.com/questions/588004/is-floating-point-math-broken) helps somewhat. – Jabberwocky Jan 26 '18 at 15:57
  • Show us your code and we might be able to tell you how to improve it. – dbush Jan 26 '18 at 15:57
  • Cannot reproduce: I get `p_new` as `0.9999999999999966` from `printf("%.16f\n", p_new);` – Weather Vane Jan 26 '18 at 16:00
  • BTW, you don't need the line continuation characters. – Bathsheba Jan 26 '18 at 16:07
  • The differences are very small compared to the magnitudes, resulting in loss of significance. Rearrange the formula so that the small numbers can show through: `double adjustment = ad_neighboor[i_xForward]; d_newPotential = ((ad_neighboor[i_xForward] - adjustment) + (ad_neighboor[i_xBackward] - adjustment) + (ad_neighboor[i_yForward] - adjustment) + (ad_neighboor[i_yBackward] - adjustment) + (ad_neighboor[i_zForward] - adjustment) + (ad_neighboor[i_zBackward] - adjustment)) / d_denom + adjustment;` Better would be to change the coordinate system so all the values are close to zero. – Raymond Chen Jan 26 '18 at 19:48 (a sketch of this rearrangement appears after these comments)
  • @dbush I updated the code! – Marcelo Silva Jan 27 '18 at 17:04
  • @RaymondChen thanks! I'll try that! All values are inside (0,1). Only in the boundary I have 0 or 1, or should have. Near 0, the values are approximately 0.6... . Maybe if I used a log-scale!? – Marcelo Silva Jan 27 '18 at 17:04
  • Log scale doesn't help with loss of significance when adding a very small number to a very large number. – Raymond Chen Jan 27 '18 at 17:48
  • I think that your x+ point should be 14,14,0. Is it a typo or a mistake in your code? – Sourabh Bhat Jan 28 '18 at 08:42
  • @SourabhBhat that really was a typo! Thanks! – Marcelo Silva Jan 29 '18 at 14:04
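
For illustration, a minimal sketch of the rearrangement Raymond Chen suggests, applied to the averaging code from the question (taking the adjustment as the first neighbour value is just one option, as the comment notes):

#include <stdio.h>

int main()
{
    double ad_neighboor[6] = {0.9999999999999936, 0.9999999999999969,
                              0.9999999999999938, 1.0000000000000000,
                              1.0000000000000000, 0.9999999999999959};

    /* Subtract a reference value close to all the inputs, sum the small
       residuals, then add the reference back at the end. */
    double adjustment = ad_neighboor[0];
    double d_sum = 0.0;

    for (unsigned int i = 0; i < 6; i++)
        d_sum += ad_neighboor[i] - adjustment;

    double d_newPotential = d_sum / 6.0 + adjustment;

    printf("%.16f\n", d_newPotential);
}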

2 Answers


Since you are solving:

d²(phi)/dx² + d²(phi)/dy² + d²(phi)/dz² = 0

Instead you can solve the equivalent problem:

d²(phi')/dx² + d²(phi')/dy² + d²(phi')/dz² = 0

where phi' = phi - 1.

Remember to apply the boundary conditions in terms of phi'.

Finally, after the iteration has converged, you can recover the solution as phi = 1 + phi'.

I am assuming here that the boundary values are close to 1.

I haven't tried this, but I think the values will then carry their significant digits near zero in the floating point representation, so the round-off error should be reduced.
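
To make this concrete, here is a minimal sketch of one Gauss-Seidel sweep written in terms of phi' = phi - 1. The mesh size and the placement of the single zero-valued boundary point are only illustrative, loosely based on the 20x20x20 mesh described in the comments:

#include <stdio.h>

#define N 20

int main()
{
    /* Store the shifted potential phi' = phi - 1, so interior values are
       small numbers near zero rather than numbers near 1. */
    static double phiShift[N][N][N] = {{{0.0}}};

    /* Boundary conditions in terms of phi':
       phi = 1 on the boundary       ->  phi' = 0 (already the initial value)
       phi = 0 at one boundary point ->  phi' = -1 there. */
    phiShift[0][N / 2][N / 2] = -1.0;

    /* One Gauss-Seidel sweep of the 7-point stencil, written for phi'. */
    for (int i = 1; i < N - 1; i++)
        for (int j = 1; j < N - 1; j++)
            for (int k = 1; k < N - 1; k++)
                phiShift[i][j][k] =
                    (phiShift[i - 1][j][k] + phiShift[i + 1][j][k] +
                     phiShift[i][j - 1][k] + phiShift[i][j + 1][k] +
                     phiShift[i][j][k - 1] + phiShift[i][j][k + 1]) / 6.0;

    /* Recover the original potential only when reporting results. */
    printf("phi(13,14,10) = %.16f\n", 1.0 + phiShift[13][14][10]);
}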

Sourabh Bhat

Your granularity is too fine for the double precision floating point type on your platform.

In the majority of cases, you'd address this by adjusting your granularity. If you need any convincing, 15 significant figures of granularity is enough to mesh the Solar system out to the orbit of Pluto in squares 1 cm on a side! For this method, I'd be inclined to reserve at least four orders of magnitude of headroom to keep clear of numerical noise.

Only in a small minority of cases should you think about switching to another data type, such as long double (if it differs from double on your platform) or an arbitrary-precision type.
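
If you do try a wider type as a quick experiment, the question's averaging could be repeated in long double; whether this gains anything is platform-dependent, so treat the following only as a sketch:

#include <stdio.h>

int main()
{
    /* Same neighbour values as in the question, summed in long double.
       On x86 this is typically 80-bit extended precision; on other
       platforms long double may be identical to double and gain nothing. */
    long double ad_neighboor[6] = {0.9999999999999936L, 0.9999999999999969L,
                                   0.9999999999999938L, 1.0000000000000000L,
                                   1.0000000000000000L, 0.9999999999999959L};

    long double d_sum = 0.0L;

    for (unsigned int i = 0; i < 6; i++)
        d_sum += ad_neighboor[i];

    long double d_newPotential = d_sum / 6.0L;

    printf("%.16Lf\n", d_newPotential);
}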

Bathsheba
  • I'm not scaling it to the real world... yet. But my mesh is really small, like 20x20x20 cells. I have just one point on the boundary that is 0; all other parts of the boundary are 1. That leads to a lot of inner regions being close to 1, and far from that 0 boundary point the cells go quickly towards 1 and sometimes form that plateau-like region I mentioned. – Marcelo Silva Jan 27 '18 at 17:09