Solved:
Because the offset was only wrong around extrema, where the derivative is close to 0, I figured it could have something to do with floating-point arithmetic.
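(What I had in mind is the usual rounding issue where two mathematically equal expressions give slightly different floats; a quick illustrative sketch, not part of my original code:)

# Illustrative only: two algebraically equal expressions can round differently,
# and near zero even a tiny absolute difference looks large in relative terms.
a = 0.1 + 0.2   # stored as 0.30000000000000004
b = 0.3
print(a - b)    # 5.551115123125783e-17, not exactly 0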
I copied the derivative that's on Wikipedia,
3(1-t)^2 (P1 - P0) + 6t(1-t) (P2 - P1) + 3t^2 (P3 - P2)
and it worked perfectly fine, even though it is mathematically equivalent to my derivative,
3t^2 (P3 - 3P2 + 3P1 - P0) + 6t (P2 - 2P1 + P0) + 3(P1 - P0)
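(A quick symbolic check, just a sketch using sympy and not part of my original script, confirms the two forms really are the same polynomial in t:)

# Sketch (assumes sympy is available): show the two derivative forms expand to the
# same polynomial, so any runtime difference has to come from the code, not the math.
import sympy as sp

t, P0, P1, P2, P3 = sp.symbols('t P0 P1 P2 P3')

wiki     = 3*(1 - t)**2*(P1 - P0) + 6*t*(1 - t)*(P2 - P1) + 3*t**2*(P3 - P2)
expanded = 3*t**2*(P3 - 3*P2 + 3*P1 - P0) + 6*t*(P2 - 2*P1 + P0) + 3*(P1 - P0)

print(sp.expand(wiki - expanded))   # prints 0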
Apparently, when calculated in Python, the average difference between the two, for t between 0 and 1 and control-point coordinates between 0 and 700, is 0.049 (averaged over 2,000,000 samples), and for t between 0 and 100 the average difference is a staggering 43.2.
Any explanation of how this error is produced would be most appreciated, as I'm not that knowledgeable in such things.
EDIT:
It turns out it's not floating point that's to blame, but tuples! Or maybe it's something else, I don't know; any help would be appreciated. Here is the test script and its output:
from random import randint, uniform
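# dt1: derivative of a cubic Bezier, straight from Wikipedia, computed per coordinate on (x, y) tuples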
def dt1(P0, P1, P2, P3, t):
    return (3*(1-t)**2*(P1[0] - P0[0]) + 6*(1-t)*t*(P2[0] - P1[0]) + 3*t**2*(P3[0] - P2[0]),
            3*(1-t)**2*(P1[1] - P0[1]) + 6*(1-t)*t*(P2[1] - P1[1]) + 3*t**2*(P3[1] - P2[1]))
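# dt2: my expanded form of the same derivative, on (x, y) tuples (this is the one that misbehaves)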
def dt2(P0, P1, P2, P3, t):
    return (t**2*3*(P3[0] - 3*P2[0] + 3*P1[0] - P0[0]) + t*6*(P2[0] - 2*P1[0] + P0[0]) + 3*(P1[0] - P0[0]),
            t**2*3*(P3[1] - 3*P2[1] + 3*P1[1] - P0[1]) + t*6*(P2[1] - 2*P1[1] + P0[0]) + 3*(P1[1] - P0[0]))
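# dt1_: the Wikipedia form again, but for a single scalar coordinate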
def dt1_(P0, P1, P2, P3, t):
    return 3*(1-t)**2*(P1 - P0) + 6*(1-t)*t*(P2 - P1) + 3*t**2*(P3 - P2)
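# dt2_: my expanded form for a single scalar coordinate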
def dt2_(P0, P1, P2, P3, t):
    return t**2*3*(P3 - 3*P2 + 3*P1 - P0) + t*6*(P2 - 2*P1 + P0) + 3*(P1 - P0)
reps = 1_000_000
# with tuples, t between 0 and 1
total = 0
for _ in range(reps):
    Points = [(randint(0, 700), randint(0, 700)) for _ in range(4)]
    t = uniform(0, 1)
    dt_1 = dt1(*Points, t)
    dt_2 = dt2(*Points, t)
    total += dt_1[0] - dt_2[0]
    total += dt_1[1] - dt_2[1]
print(f"for t between 0 and 1, with tuples, the average difference is {total/(reps*2)}")
# with tuples, t between 0 and 100
total = 0
for _ in range(reps):
    Points = [(randint(0, 700), randint(0, 700)) for _ in range(4)]
    t = uniform(0, 100)
    dt_1 = dt1(*Points, t)
    dt_2 = dt2(*Points, t)
    total += dt_1[0] - dt_2[0]
    total += dt_1[1] - dt_2[1]
print(f"for t between 0 and 100, with tuples, the average difference is {total/(reps*2)}")
# without tuples, t between 0 and 1
total = 0
for _ in range(reps*2):
    Points = [randint(0, 700) for _ in range(4)]
    t = uniform(0, 1)
    dt_1 = dt1_(*Points, t)
    dt_2 = dt2_(*Points, t)
    total += dt_1 - dt_2
print(f"for t between 0 and 1, without tuples, the average difference is {total/(reps*2)}")
# without tuples, t between 0 and 100
total = 0
for _ in range(reps*2):
    Points = [randint(0, 700) for _ in range(4)]
    t = uniform(0, 100)
    dt_1 = dt1_(*Points, t)
    dt_2 = dt2_(*Points, t)
    total += dt_1 - dt_2
print(f"for t between 0 and 100, without tuples, the average difference is {total/(reps*2)}")
Results:
for t between 0 and 1, with tuples, the average difference is 0.10319855785147072
for t between 0 and 100, with tuples, the average difference is -21.841299912204903
for t between 0 and 1, without tuples, the average difference is -3.370475946951057e-17
for t between 0 and 100, without tuples, the average difference is -1.1170571903237891e-12