I'm running into some puzzling experimental results with my function implementations, and I would like others to verify that the functions I'm using are logically sound.
Context: For a certain programming/math problem, I need to calculate an average across a continuous interval to a precision of 10 decimal places. The function is rather complicated and involves two dimensions, so I would prefer a discrete sum approximation over computing the continuous average exactly (which would mean integrating the function in both dimensions myself).
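For concreteness, the plan is to reduce the 2D average to nested 1D averages, along the lines of the sketch below. Here average_2d is a hypothetical name, and riemann_avg (defined below) is an arbitrary stand-in for whichever 1D averager I end up using:

// Hypothetical sketch: reduce the 2D average to nested 1D averages.
// average_2d is a placeholder name; riemann_avg is used here only as an
// example of the inner/outer averager.
fn average_2d<F>(ax: f64, bx: f64, ay: f64, by: f64, delta: f64, f: F) -> f64
where
    F: Fn(f64, f64) -> f64,
{
    // Average over y of the x-average of f(x, y).
    riemann_avg(ay, by, delta, |y| riemann_avg(ax, bx, delta, |x| f(x, y)))
}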
To help me out, I have been working on a set of approximation functions in Rust. They compute discrete averages of the function f across an interval a..b with a fixed increment delta, using Riemann sums, the trapezoidal rule, or Simpson's rule, depending on the implementation:
pub fn riemann_avg<F>(a: f64, b: f64, delta: f64, f: F) -> f64
where
    F: Fn(f64) -> f64,
{
    // Number of steps of width delta that fit in [a, b].
    let n = ((b - a) / delta) as usize;
    // Sample at 0.0, delta, 2*delta, ..., (n - 1)*delta, then average.
    (0..n)
        .map(|i| (i as f64) * delta)
        .map(f)
        .sum::<f64>() / (n as f64)
}
pub fn trapezoidal_avg<F>(a: f64, b: f64, delta: f64, f: F) -> f64
where
    F: Fn(f64) -> f64,
{
    let n = ((b - a) / delta) as usize;
    // Sample the same n points, then sum f(x_i) + f(x_{i+1}) over the
    // n - 1 adjacent pairs and normalize.
    (0..n)
        .map(|i| (i as f64) * delta)
        .map(f)
        .collect::<Vec<f64>>()
        .windows(2)
        .map(|xs| xs[0] + xs[1])
        .sum::<f64>() / (2.0 * (n as f64))
}
pub fn simpson_avg<F>(a: f64, b: f64, delta: f64, f: F) -> f64
where
    F: Fn(f64) -> f64,
{
    let n = ((b - a) / delta) as usize;
    // Weight each of the n - 2 overlapping triples as 1, 4, 1 (two offset
    // Simpson applications; see the side note below) and normalize.
    (0..n)
        .map(|i| (i as f64) * delta)
        .map(f)
        .collect::<Vec<f64>>()
        .windows(3)
        .map(|xs| xs[0] + 4.0 * xs[1] + xs[2])
        .sum::<f64>() / (6.0 * (n as f64))
}
(Side note: The simpson_avg function I've written above is actually an average over two offset applications of Simpson's rule, since that makes the program less complex. Currently, I don't believe this is part of the issue.)
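For reference, here is a sketch of the textbook composite Simpson's rule, normalized to an average. This is not the code under test, just the baseline I have in mind; it rounds n up to an even number of subintervals, which my overlapping-windows version avoids having to do:

// Reference sketch: standard composite Simpson's rule, turned into an
// average by dividing the integral estimate by (b - a).
pub fn simpson_avg_reference<F>(a: f64, b: f64, delta: f64, f: F) -> f64
where
    F: Fn(f64) -> f64,
{
    // Composite Simpson's rule needs an even number of subintervals.
    let mut n = ((b - a) / delta).round() as usize;
    if n % 2 == 1 {
        n += 1;
    }
    let h = (b - a) / (n as f64);
    // Endpoint weights are 1; interior weights alternate 4, 2, 4, ..., 4.
    let mut sum = f(a) + f(b);
    for i in 1..n {
        let w = if i % 2 == 1 { 4.0 } else { 2.0 };
        sum += w * f(a + (i as f64) * h);
    }
    // Integral estimate is (h / 3) * sum; divide by (b - a) for the average.
    (h / 3.0) * sum / (b - a)
}

Since Simpson's rule is exact for cubics, this reference should reproduce the averages of x, x.powi(2), and x.powi(3) up to rounding error.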
I tested each method's error convergence as I brought delta closer to zero, using the test functions x.atan(), x.powi(4), and x, with the bounds of integration set from 0.0 to 1.0.
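The errors are absolute deviations from the closed-form averages over 0.0..1.0, namely pi/4 - ln(2)/2 for x.atan(), 1/5 for x.powi(4), and 1/2 for x. A minimal sketch of the kind of harness that produces the tables below, assuming the three functions above (the case table and loop are just one way to drive it):

fn main() {
    // Exact averages over [0, 1]: each integral equals the average,
    // since b - a = 1. For atan: integral = pi/4 - ln(2)/2.
    let exact_atan = std::f64::consts::FRAC_PI_4 - 0.5 * std::f64::consts::LN_2;
    let cases: [(&str, fn(f64) -> f64, f64); 3] = [
        ("x.atan()", |x: f64| x.atan(), exact_atan),
        ("x.powi(4)", |x: f64| x.powi(4), 0.2),
        ("x", |x: f64| x, 0.5),
    ];
    for (name, f, exact) in cases {
        println!("{name}:");
        let mut delta = 0.1;
        for _ in 0..7 {
            let r = (riemann_avg(0.0, 1.0, delta, f) - exact).abs();
            let t = (trapezoidal_avg(0.0, 1.0, delta, f) - exact).abs();
            let s = (simpson_avg(0.0, 1.0, delta, f) - exact).abs();
            println!("{delta:.10} {r:.10} {t:.10} {s:.10}");
            delta /= 10.0;
        }
    }
}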
delta        riemann_err  trap_err     simpson_err  (smaller is better)
x.atan():
0.1000000000 0.0396869230 0.0763276780 0.1136616747
0.0100000000 0.0039311575 0.0078330229 0.0117430951
0.0010000000 0.0003927407 0.0007851897 0.0011777219
0.0001000000 0.0000392703 0.0000785377 0.0001178060
0.0000100000 0.0000073928 0.0000113197 0.0000152467
0.0000010000 0.0000003927 0.0000007854 0.0000011781
0.0000001000 0.0000000393 0.0000000785 0.0000001178
x.powi(4):
0.1000000000 0.0466700000 0.0794750000 0.1081733333
0.0100000000 0.0049666670 0.0097696470 0.0145089140
0.0010000000 0.0004996667 0.0009976697 0.0014950090
0.0001000000 0.0000499967 0.0000999767 0.0001499500
0.0000100000 0.0000129997 0.0000179993 0.0000229989
0.0000010000 0.0000005000 0.0000010000 0.0000015000
0.0000001000 0.0000000500 0.0000001000 0.0000001500
x:
0.1000000000 0.0500000000 0.0950000000 0.1400000000
0.0100000000 0.0050000000 0.0099500000 0.0149000000
0.0010000000 0.0005000000 0.0009995000 0.0014990000
0.0001000000 0.0000500000 0.0000999950 0.0001499900
0.0000100000 0.0000100000 0.0000149999 0.0000199999
0.0000010000 0.0000005000 0.0000010000 0.0000015000
0.0000001000 0.0000000500 0.0000001000 0.0000001500
I expected Simpson's rule to converge fastest out of the three methods, but as you can see it had the worst convergence rate, with plain Riemann sums performing best. To me this makes no sense, especially for the polynomial examples: Simpson's rule is exact for polynomials up to degree three, so for f(x) = x it should match the true average up to rounding error, and even for x.powi(4) it should converge faster than the other two rules, not slower. I'm guessing this means there is either a very subtle problem in my function logic/formulas, or I'm running into a floating-point precision error. I would love some help diagnosing this problem.
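One check that might help separate the two hypotheses: the average of a constant function is exactly that constant under any correctly normalized weighting, independent of delta, so any deviation there could not come from floating-point evaluation of the integrand. A minimal standalone sketch:

// Sanity check: averaging f(x) = 1.0 should return exactly 1.0 for every
// method if the weights are normalized correctly; f contributes no rounding.
fn sanity_check() {
    let one = |_x: f64| 1.0;
    println!("riemann:     {:.10}", riemann_avg(0.0, 1.0, 0.1, one));
    println!("trapezoidal: {:.10}", trapezoidal_avg(0.0, 1.0, 0.1, one));
    println!("simpson:     {:.10}", simpson_avg(0.0, 1.0, 0.1, one));
}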