
I have tabular data of acceleration values over time, like this example:

time(s)  acc_x   acc_y
0.1      0       0
0.2     -0.98    1.66
0.3      1.42    1.72
0.4     -1.98   -0.3
0.5     -0.3    -0.79
0.6     -1.15    1.65
0.7      1.2    -0.5
0.8      1.97    0.51
0.9     -0.74   -0.39
1       -0.47   -1.06
1.1      1.77    0.87
1.2     -0.35   -0.67
1.3      1.4     0.51
1.4      1.72    1.47
1.5     -0.37   -0.83
1.6      1.65   -0.07
1.7      1.51   -0.53
1.8     -0.46   -0.8
1.9     -0.35   -0.18
2        0       0

From this, I want to calculate position values by integrating twice, so that I can use the position coordinates as keyframed positions in Blender. Because I do not always know the timebase of my input data, I want to resample it to the inter-frame time intervals.

This is what I tried so far, mainly by adapting this code sample: Rolling integral over pandas dataframe with time index

import pandas as pd
from scipy import integrate

cur_fps = 25  # bpy.context.scene.render.fps
timedelta_base = 1e6 / cur_fps  # inter-frame interval in microseconds (40000 us at 25 fps)
acc_table = pd.read_excel("C:/Temp/exampleaccelerations.xlsx", index_col=0)  # read the table from disk
timedeltastrings = {"interp": "%d" % (timedelta_base / 100) + "us",
                    "vel": "%d" % (timedelta_base / 10) + "us",
                    "pos": "%d" % (timedelta_base) + "us"}
acc_table_interp = acc_table.resample(timedeltastrings["interp"]).interpolate(method="linear")
vel_table = acc_table_interp.rolling(timedeltastrings["vel"]).apply(integrate.trapz)
vel_table_interp = vel_table.resample(timedeltastrings["vel"]).interpolate(method="linear")
pos_table = vel_table.rolling(timedeltastrings["pos"]).apply(integrate.trapz)
pos_table_interp = pos_table.resample(timedeltastrings["pos"]).interpolate(method="linear")

The code may not be especially tidy, but it runs and gives results. However, the resulting values are far too high compared to a manual evaluation (e.g. in Excel). I cannot even draw a mental connection between the results and the input.

In case you wonder, the resampling is supposed to give the rolling integrator some values to work with. Without resampling and a window size of 100ms (analogous to my understanding of the answer linked above), the results of the integrations are all-zero dataframes.

Could anyone please point me in the direction of how to correctly use the scipy integrator (or any other equivalent function) so that I can get the correct results?

2 Answers


You can use scipy.integrate.cumtrapz for numerical integration.

Assuming your data is stored in a pd.DataFrame df

from scipy.integrate import cumtrapz

velocities = cumtrapz(df[['acc_x','acc_y']], df['time(s)'], axis=0)
positions = cumtrapz(velocities, dx=0.1, axis=0)
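
In recent SciPy releases cumtrapz has been renamed to cumulative_trapezoid. A minimal sketch of the same double integration, assuming the DataFrame df from above and a 25 fps target frame rate (the column names x/y and the 40 ms frame interval are illustrative assumptions); passing initial=0 keeps the outputs the same length as the input, which makes resampling to frame times straightforward:

import pandas as pd
from scipy.integrate import cumulative_trapezoid

t = df['time(s)'].to_numpy()
acc = df[['acc_x', 'acc_y']].to_numpy()

# initial=0 prepends a zero row, so velocity/position have as many rows as the input
vel = cumulative_trapezoid(acc, t, axis=0, initial=0)
pos = cumulative_trapezoid(vel, t, axis=0, initial=0)

# resample to the assumed frame interval (40 ms at 25 fps) via a timedelta index
pos_df = pd.DataFrame(pos, columns=['x', 'y'], index=pd.to_timedelta(t, unit='s'))
pos_frames = pos_df.resample('40ms').interpolate(method='linear')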

To interpret the results of the integration you can plot position, velocity and acceleration:

import matplotlib.pyplot as plt

plt.figure(figsize=(8,8))
plt.scatter(positions[:,0], positions[:,1], label='Position')
plt.quiver(
    positions[:,0], positions[:,1],
    velocities[1:,0], velocities[1:,1], color='blue',
    width=0.003, angles='xy', label='Velocity')
plt.quiver(
    positions[:,0], positions[:,1],
    df['acc_x'].iloc[2:], df['acc_y'].iloc[2:], color='green',
    width=0.003, angles='xy', label='Acceleration')
plt.legend();

Out: [figure: position vs acceleration]
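
If the goal is keyframes rather than a plot, here is a hedged sketch of how the integrated positions could be written to Blender; the active-object assumption, the mapping of positions[:,0]/positions[:,1] to X/Y, and the one-keyframe-per-sample placement are illustrative, and in practice the positions would first be resampled to the frame interval as described in the question:

import bpy  # only available inside Blender's Python environment

obj = bpy.context.object  # assumption: animate the currently active object

for frame, (x, y) in enumerate(positions, start=1):
    obj.location = (x, y, 0.0)                              # set X/Y, keep Z fixed
    obj.keyframe_insert(data_path="location", frame=frame)  # one keyframe per sample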

– Michael Szczesny

Here is how I would do it with my understanding of physics:

The distances in x and y can be treated separately, and each follows from Newton's equation of motion:

s = 1/2 * a * t**2 + v0 * t + s0

First set up the dataframe (a numpy array would be easier):

import pandas as pd

cols=['time','acc_x','acc_y']
dat=[  [0.1,0,0],
        [0.2,-0.98,1.66],
        [0.3,1.42,1.72],
        [0.4,-1.98,-0.3],
        [0.5,-0.3,-0.79],
        [0.6,-1.15,1.65],
        [0.7,1.2,-0.5],
        [0.8,1.97,0.51],
        [0.9,-0.74,-0.39],
        [1,-0.47,-1.06],
        [1.1,1.77,0.87],
        [1.2,-0.35,-0.67],
        [1.3,1.4,0.51],
        [1.4,1.72,1.47],
        [1.5,-0.37,-0.83],
        [1.6,1.65,-0.07],
        [1.7,1.51,-0.53],
        [1.8,-0.46,-0.8],
        [1.9,-0.35,-0.18],
        [2,0,0]]

df = pd.DataFrame(data=dat, columns=cols)

In general delta t can vary (as with real measurements), so I calculate delta t ('dt') as a helper column:

df['dt'] = df['time'].diff().fillna(0)

With starting conditions v0 = 0 and s0 = 0:

df['s_x'] = 0.0
df['s_y'] = 0.0
df['v_x'] = 0.0
df['v_y'] = 0.0

Unfortunately the i-th value always depends on the (i-1)-th, so a simple vectorized expression (lambda or apply) doesn't work here:

for i in range(1, len(df)):
    df.loc[i, 'v_x'] = df.loc[i-1, 'v_x'] + df.loc[i, 'acc_x'] * df.loc[i, 'dt']
    df.loc[i, 'v_y'] = df.loc[i-1, 'v_y'] + df.loc[i, 'acc_y'] * df.loc[i, 'dt']
    df.loc[i, 's_x'] = df.loc[i-1, 's_x'] + df.loc[i-1, 'v_x'] * df.loc[i, 'dt'] \
                     + 0.5 * df.loc[i, 'acc_x'] * df.loc[i, 'dt']**2
    df.loc[i, 's_y'] = df.loc[i-1, 's_y'] + df.loc[i-1, 'v_y'] * df.loc[i, 'dt'] \
                     + 0.5 * df.loc[i, 'acc_y'] * df.loc[i, 'dt']**2
    
xpos = round(df['s_x'].loc[len(df)-1],3)
ypos = round(df['s_y'].loc[len(df)-1],3)

print('final position: ',xpos,ypos)

vx_end = round(df['v_x'].loc[len(df)-1],3)
vy_end = round(df['v_y'].loc[len(df)-1],3)

print('final speed (x,y): ',vx_end,vy_end)
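
The same recurrence can also be evaluated without the Python loop by using cumulative sums; a sketch under the same starting conditions (v0 = 0, s0 = 0), reusing the dt helper column from above:

dt = df['dt']
for ax in ('x', 'y'):
    a = df['acc_' + ax]
    v = (a * dt).cumsum()                     # v_i = v_{i-1} + a_i * dt_i
    df['v_' + ax] = v
    # s_i = s_{i-1} + v_{i-1} * dt_i + 0.5 * a_i * dt_i**2
    df['s_' + ax] = (v.shift(1, fill_value=0) * dt + 0.5 * a * dt**2).cumsum()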

I hope this is closer to what you expect. Doing it this way gives you the object's position (s_x, s_y) at every point in time, along with its speeds (v_x, v_y).

– Oliver Prislan
  • That's what I was thinking about doing myself, but I was pretty sure that there had to be some pre-built function. Considering that those are usually much faster than solutions based on for-loops in python, I prefer the solution by @Michael Szczesny. Thank you anyway! – haarigertroll Oct 12 '20 at 08:53
  • Yes, I like Michael's solution better too, and it's also 100 times faster. – Oliver Prislan Oct 12 '20 at 10:50