
Possible Duplicate:
Getting displacement from accelerometer data with Core Motion
Android accelerometer accuracy (Inertial navigation)

I am trying to use Core Motion user acceleration values and double integrate them to derive the distance covered. I move my iPhone linearly along its Y axis, against a 30 cm long ruler, on the table. First, I let the device rest for 10 seconds and calculate my offsets along the three axes by averaging the respective user acceleration values. These X, Y and Z offsets are subtracted from the acceleration values when I calculate the distance covered. After offset subtraction, the values are passed through a low-pass filter and a median filter, separately of course. Both filters operate over a sliding window: the "cut-off" is specified by the number of neighbouring values whose mean (low-pass) or median (median filter) is taken. I have experimented with values of this number from 1 to 100. Finally, the filtered values are double integrated using the trapezoidal rule to get distances.

But the distance calculated is nowhere close to 30 cm. The closest value I got was about -22 cm (I am wondering why I get negative values even though I move the device in the positive Y direction). I also came across this old post about the same thing: http://ajnaware.wordpress.com/2008/09/05/accelerating-iphones/ It says that the accelerometer readings appeared to come in quanta of about 0.18 m/s^2 (i.e. about 0.018 g), resulting in a large cumulative error very quickly. Going by that, for this error not to matter, one would have to accelerate the device by almost 1.8 m/s^2, which is practically impossible for distance/length measurement purposes. For small movements, it does not look like there is any possibility of calculating distances with an optimal filter and a higher-order numerical integration method, without an impractical velocity/acceleration constraint like that. Is it possible?
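A minimal sketch of the pipeline described above (offset calibration plus trapezoidal double integration) on synthetic data, not real Core Motion output. All names and the bias value are illustrative; it shows how even a small residual bias, well below the ~0.18 m/s^2 quantum mentioned, drifts the computed distance:

```python
import numpy as np

dt = 0.01  # 100 Hz, i.e. deviceMotionUpdateInterval = 0.01

def double_integrate(accel, dt):
    """Trapezoidal rule applied twice: acceleration -> velocity -> distance."""
    vel = np.concatenate(([0.0], np.cumsum((accel[1:] + accel[:-1]) * dt / 2.0)))
    pos = np.concatenate(([0.0], np.cumsum((vel[1:] + vel[:-1]) * dt / 2.0)))
    return pos[-1]

# Synthetic 1 s move covering 0.30 m: accelerate, then decelerate symmetrically.
t = np.arange(0.0, 1.0, dt)
true_accel = np.where(t < 0.5, 1.2, -1.2)   # m/s^2

# Residual bias of 0.02 m/s^2 mimics imperfect rest-period offset calibration.
measured = true_accel + 0.02

ideal = double_integrate(true_accel, dt)    # ~0.29 m (discretisation only)
drifted = double_integrate(measured, dt)    # bias adds ~0.02 * t^2 / 2, i.e. ~1 cm
```

The point of the sketch is that a constant bias grows quadratically in the distance estimate, so after just one second a 0.02 m/s^2 residual already costs about a centimetre on a 30 cm measurement.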
How about using my acceleration-vs-timestamp data to interpolate a polynomial that grows over time, as I get more and more motion updates, approximating the acceleration-vs-time curve? Double integration of this polynomial would be a piece of cake. But for small distances, the polynomial will have a big error component. Using a predictable, known motion that my device will be subjected to, I wish to take a huge number of snapshots (calculated distance vs actual known distance) to fit an error polynomial in a similar way, and then subtract it from my first polynomial. Can this work?
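The polynomial idea above can be sketched as follows, again on synthetic, noise-free data (the names and the chosen acceleration profile are illustrative, and this does not by itself address the sensor-noise problem):

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 1.0, dt)

# Synthetic smooth acceleration whose exact double integral over 1 s is 0.30 m:
# a(t) = 1.8 - 3.6 t  =>  v(t) = 1.8 t - 1.8 t^2,  x(t) = 0.9 t^2 - 0.6 t^3.
accel = 1.8 - 3.6 * t

coeffs = np.polyfit(t, accel, deg=3)   # fit a(t); the degree is a free choice
vel_poly = np.polyint(coeffs)          # integrate once, assuming v(0) = 0
pos_poly = np.polyint(vel_poly)        # integrate twice, assuming x(0) = 0
distance = np.polyval(pos_poly, t[-1])
```

Note that the two integration constants (initial velocity and position) must be known independently; and with real, noisy samples the fitted polynomial inherits the bias, so the hard part is still the calibration, not the integration.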

Sonu Jha
    It's a good question, but it would be great to see your accompanying code... – foundry Dec 20 '12 at 09:14
  • @SonuJha The above question is about Android, but the same holds for iOS Core Motion. In short, no, you cannot do the double integral; these sensors are not accurate enough. – Ali Dec 20 '12 at 09:29
  • Here is another one: [Need to find Distance using Gyro+Accelerometer](http://stackoverflow.com/questions/6647314/need-to-find-distance-using-gyroaccelerometer). At the bottom line it's impossible because of horrible errors. – Kay Dec 20 '12 at 09:30
  • Thanks He Was for replying back! I have three C-style arrays XA[10000], YA[10000] and ZA[10000], in which I store the acceleration values on every motion update (deviceMotionUpdateInterval set to 0.01). When the device is at rest, some 1000 values along the respective axes are averaged to get the three offsets. Three further arrays XA_F, YA_F and ZA_F are filled with the filtered values, after multiplying them by 981 (since I am measuring in cm). – Sonu Jha Dec 20 '12 at 09:30
  • @Kay I can only encourage you to please write your blog post on this recurring question. It is amazing how many times it comes up. – Ali Dec 20 '12 at 09:33
  • Yes, I agree it cannot be done directly by simply integrating the accel values. But how about using the accel values to interpolate a function, a high-order polynomial? Extensive experimentation would be required, but I am just wondering whether such a technique could offset such huge sampling errors and give some valid results. My question is about that. Or a coefficient-based simulation or something? – Sonu Jha Dec 20 '12 at 09:37
  • @Ali Yes :) At the moment I'm too busy in finishing my game. But when it will be launched I will do it and publish some new insights and special cases of motion detection I used in the game. – Kay Dec 20 '12 at 09:52
  • @Kay Looking forward to it, please let me know! – Ali Dec 20 '12 at 10:40
  • Hey guys, do you have any updates on this topic? How do things look now in 2017? Any info will be useful (@Kay, Sonu) – oluckyman Feb 01 '17 at 18:40

1 Answer


Although this does not really fit StackOverflow, because it is more a discussion than a question, I'll try to sum up my thoughts about it.

As already said, the accelerometer is very inaccurate, and you would need very good accuracy for this kind of task, especially if you are trying to measure such short distances. On top of that, accelerometers differ from device to device, so you will get different results for the same movement on different devices. And there is a very large random error.

My guess is that you can get rid of a large part of the random error by calibrating the device and repeating the "measurement move" a number of times, say 10. After that you have enough data to compute an average that might get close to the real value.
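A toy illustration of the averaging idea, with made-up error magnitudes: repeating the move N times shrinks the random part of the error roughly as 1/sqrt(N), but a systematic bias survives averaging and can only be removed by calibration.

```python
import random

random.seed(1)
true_distance = 0.30   # metres
bias = 0.02            # systematic error: survives averaging
sigma = 0.05           # random error per attempt: shrinks as 1/sqrt(N)

def measure_once():
    """One simulated 30 cm measurement with bias and Gaussian noise."""
    return true_distance + bias + random.gauss(0.0, sigma)

# Averaging 100 attempts leaves essentially only the 2 cm systematic offset.
averaged = sum(measure_once() for _ in range(100)) / 100.0
```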

Calibration is a key part here; you have to think of a clever way to calibrate, for example by letting the user move the device over different distances at different speeds.

But all this is just theory. I would really like to see your results, but I doubt you will get it working well enough even with the best possible filters/algorithms, since there is simply too much noise.

jimpic