
I would like to know how Gauss-Laguerre quadrature behaves for large limits. For example, I have a 2D function defined on (0, +inf) in both dimensions. When I apply Gauss-Laguerre in Python, sampling the function at the abscissas and summing with the weights, I don't get anything close to what I get using, say, dblquad. Below is sample code for the integration: lgw outputs the weights and abscissas, which are then used in the double integration via two nested for loops. I do not see how a sample point like x, y = 1e8, 1e8 is ever captured by this scheme. Increasing n doesn't give high abscissas (at least not nearly as high as required).

import numpy

# lgw(n) returns the degree-n Gauss-Laguerre abscissas and weights
# (e.g. via numpy.polynomial.laguerre.laggauss)
kzas, kzws = lgw(n)
integral = 0.0
for kta, ktw in zip(kzas, kzws):
   for kza, kzw in zip(kzas, kzws):
      fval = integrand(kza, kta)
      wghtx = kzw*numpy.exp(kza)  # exp factor converts the e^{-x}-weighted rule
      wghty = ktw*numpy.exp(kta)  # into a plain integral over (0, inf)
      integral += wghtx*wghty*fval

Can someone explain how to capture the higher sample points? Am I not using the quadrature correctly? I can integrate functions with small effective limits, say 1e2 or so. What should I do if the limit is high, say 1e15? I see the definition from theory, but I do not see how the large abscissas and weights are ever reached.
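For reference, the dblquad comparison is essentially the following (a minimal sketch; note that dblquad expects its integrand as func(y, x) and returns the value together with an absolute-error estimate):

import numpy
from scipy.integrate import dblquad

# integrand(x, y) is the same function sampled in the loops above
ref, err = dblquad(lambda y, x: integrand(x, y),
                   0, numpy.inf,                      # x limits
                   lambda x: 0, lambda x: numpy.inf)  # y limits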

Thanks

Edit: It is not possible to reduce my function any further. The different parts of the integrand are computed numerically, so I don't have an analytical expression. All I can say is that the function is smooth and has a roughly sinusoidal behaviour.

user3840530
  • What is the behavior of your function as the arguments approach `+inf`? I suspect you can never get a good numeric approximation by just sampling "far enough" points when the behavior of the function beyond that "far enough" is non-trivial. Can you define a radius within which you apply the numeric method, and estimate the rest analytically? – 9000 Jan 24 '17 at 16:23
  • @9000 The function is nonzero even for values like 1e10 and goes to zero only after 5e10. In between it varies between 0 and 1 in a roughly sinusoidal way (not always, but it has that kind of variation). It doesn't reach very high function values and doesn't have any singularities. – user3840530 Jan 24 '17 at 16:25

1 Answer


If I read this correctly, the roots of the nth Laguerre polynomial are bounded above by

n + (n-1) sqrt(n)

meaning that you'd have to go to an insanely high degree before the rule places sample points at the more remote parts of your integrand's support.
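You can check this with numpy directly; a minimal sketch (assuming lgw wraps numpy.polynomial.laguerre.laggauss, whose results numpy only vouches for up to degree 100):

import numpy

# largest abscissa of the degree-n Gauss-Laguerre rule
for n in (10, 100, 1000):
    x, w = numpy.polynomial.laguerre.laggauss(n)
    print(n, x.max())
# roughly: 10 -> ~3e1, 100 -> ~3.7e2, 1000 -> ~3.9e3
# nowhere near 1e8, let alone 1e15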

What you could try, if your integrand doesn't oscillate too quickly, is to rescale the axes. More specifically, you can shift the support of your integrand into the range of the nodes using

\lambda \int_0^\infty f(\lambda x) dx = \int_0^\infty f(x) dx

In your case you'll probably want to use a rather large \lambda (the identity is just the substitution u = \lambda x).

To be even more specific, try replacing the first line in your innermost loop with

  fval = lam*lam * integrand(lam*kza, lam*kta)
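Here lam*lam is the Jacobian of the 2D substitution; the two weight lines stay exactly as they are. Put together with the code from the question, a sketch (the value of lam is hypothetical and has to be tuned to your integrand's support):

lam = 1e8  # hypothetical scale; choose so lam*kzas.max() covers the support
kzas, kzws = lgw(n)
integral = 0.0
for kta, ktw in zip(kzas, kzws):
    for kza, kzw in zip(kzas, kzws):
        fval = lam*lam * integrand(lam*kza, lam*kta)  # stretched arguments
        wghtx = kzw*numpy.exp(kza)  # unchanged
        wghty = ktw*numpy.exp(kta)  # unchanged
        integral += wghtx*wghty*fval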
Paul Panzer
  • The main purpose of the method is to integrate functions over (0, inf) easily. Requiring such a high degree would defeat that purpose, especially when such a high degree is not available through built-in routines. I am not saying that it cannot be calculated, but there should be a way of making it work for such limits, as people have been using this methodology for a long time. – user3840530 Jan 24 '17 at 17:58
  • Well, for an improper integral of that kind to exist in the first place, the integrand has to decay fast enough eventually. I have to admit I have no intuition for where the particular length scales in Gauss-Laguerre come from, but they are what they are, and they seem to fit the scale of your integrand poorly. But as I said, you could try to rescale using c \int_0^\infty f(cx) dx = \int_0^\infty f(x) dx. If you choose c large enough, you can bring the support of your integrand into a manageable range. – Paul Panzer Jan 24 '17 at 18:29
  • I tried your last suggestion of using lambda. I get larger values than without it, but they are still below what I get using dblquad. That could still be right, since it could be dblquad that is calculating it wrong. If I try to increase lambda further, I get overflow errors in the exponentials (nan etc.), so at most I can go to 1e9 for lambda. If I use a lower value of lambda, I get very small values of the integrand. I wonder if there is another way of transforming such an integral. Thanks for the suggestion. – user3840530 Jan 26 '17 at 12:57
  • Where exactly do you get the overflow? In your integrand function? Just to be sure: you must stretch kza and kta in the first line, but *not* in the second and third. – Paul Panzer Jan 26 '17 at 14:21
  • Yes, I made that mistake but corrected it quickly. I get the overflows inside the integrand itself. – user3840530 Jan 26 '17 at 14:46
  • Btw., while you are in principle right that dblquad could be off, I doubt it will be worse than Gauss-Laguerre. I think it uses QUADPACK under the hood, which has lots of smart little tricks such as adaptive subdivision, and it has seen lots of field testing. Did you have a look at the error estimate it returns? Also, are you using a large enough n? Your integrand seems to be a difficult one, and you should use enough nodes to capture all its wiggles. – Paul Panzer Jan 26 '17 at 14:48
  • I had a look inside dblquad and at the error estimates. I know it uses adaptive quadrature and will in general be better than coding everything from scratch, which is why I like to stay away from designing custom methods unless I am really sure. I also tried breaking the integration range into parts and summing up, and, surprisingly, I get the same result from the broken-up integral as from computing it all at once with dblquad. The only thing that changes is the error, which is smaller when the integral is broken up. The error estimate can be 1e11 (which is confusing as well). – user3840530 Jan 26 '17 at 14:58
  • That's the absolute error, right? How does it compare to the value of the integral? – Paul Panzer Jan 26 '17 at 15:15
  • Yes, that's what I get from dblquad as the error (1e11). It is much smaller than the integral value (~1e19). But it worries me that something may not be right with the routine itself, as I am dealing with large limits. I am looking at the integral as well to find out if something is wrong there. – user3840530 Jan 26 '17 at 15:28
  • I wish I could be of more help, but that's about all I can think of. The dblquad result looks OK to me; the relative error is as good as one could expect, and your cutting-up checks give consistent results. – Paul Panzer Jan 26 '17 at 15:50
  • Thanks for your comments. – user3840530 Jan 26 '17 at 15:56