
I would like to know the best way to overcome the unexpected behavior of numpy.arange() when floats are involved. In particular, I have identified two problems in the following example:

I need to discretize my time domain from 0 up to t_max (inclusive), with time steps dt. For that I define these two variables and redefine dt so the time vector is uniformly spaced, as follows.

t_max = 200    
dt = 0.07 
dt = t_max/np.ceil(t_max/dt)                  # dt = 0.06997900629811056
t = np.arange(0, t_max+dt, dt, dtype=float)   # so to have t = [0, 0.0699..., ..., 200]

The 1st problem is that t[-1] = 199.99999999999997, because t_max/dt = 2858.0000000000005 (with the redefined dt), and not 2858.0 as expected.
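For the 1st problem, a sketch of one common workaround: let np.linspace pin the endpoint, since with endpoint=True (the default) its last sample is exactly the stop value, and compute the step count once with np.ceil as above:

```python
import numpy as np

t_max = 200
dt = 0.07
n = int(np.ceil(t_max / dt))      # 2858 steps of (roughly) dt
t = np.linspace(0, t_max, n + 1)  # linspace sets the last sample to t_max exactly

print(t[-1] == t_max)  # True
```

The effective spacing t[1] - t[0] then differs from 0.07 by the same small amount as the redefined dt, but the endpoint is exact by construction.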

The 2nd problem is related to the use of floats as the step, in this case dt. I usually follow the suggestion in np.arange does not work as expected with floating point arguments. I arbitrarily chose 0.5 because it is halfway between not getting t_max (which I want) and getting t_max+dt (which I don't want), but this seems a bit odd.

t = np.arange(0, t_max/dt+0.5, 1, dtype=float)*dt
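A less arbitrary variant of the same idea, assuming dt has already been redefined so that t_max/dt is an integer up to rounding error: round the step count instead of offsetting by 0.5, and build the grid from an integer arange:

```python
import numpy as np

t_max = 200
dt = t_max / np.ceil(t_max / 0.07)       # redefined dt, as above
n = int(round(t_max / dt))               # 2858: round away the 5e-13 error
t = np.arange(n + 1, dtype=float) * dt   # exact integer arange, then scale
```

This fixes the number of samples, though t[-1] may still be off from t_max by a rounding error in the final multiplication, so the endpoint test below still needs a tolerance.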

In MATLAB, all of this seems to work with a simple t = 0:dt:t_max;.

In my case, this is an issue because logical tests like the one below fail when I expect them to return True. Most of the time it is handy to work with x_max, but the real figure is x[-1] (as all the code is based on my time domain).

x_max = v*t_max
x = v*t
x_max == x[-1]   # False
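A sketch of how such tests can be made robust (v is a hypothetical speed, and 199.99999999999997 is the t[-1] value from above): compare with a tolerance via math.isclose instead of ==:

```python
import math

v = 3.0                      # hypothetical speed
t_max = 200
t_last = 199.99999999999997  # the t[-1] produced by arange above

x_max = v * t_max
x_last = v * t_last
print(x_max == x_last)              # False: exact float equality fails
print(math.isclose(x_max, x_last))  # True: comparison with a tolerance
```

For whole arrays, np.isclose (elementwise) or np.allclose serve the same purpose.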

I refer to Python: Range or numpy Arange with end limit include, where a good alternative is given. However, the function provided does not handle my dt = 0.06997900629811056 (it effectively treats it as 0.07), so I am a bit concerned about using it.
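For illustration only, a hypothetical variant of such a helper (the name, signature, and tolerance are my own, not from the linked answer): it includes the stop value when it lies within a relative tolerance of a whole number of steps, which handles dt = 0.06997900629811056 as well as plain 0.07:

```python
import numpy as np

def arange_inclusive(start, stop, step, rtol=1e-9):
    """Like np.arange, but include `stop` when (stop - start) / step
    is a whole number up to a relative tolerance. A sketch, not a
    drop-in replacement for the linked answer's function."""
    n = (stop - start) / step
    if abs(n - round(n)) < rtol * abs(n):
        n = int(round(n))        # stop falls on the grid: include it
    else:
        n = int(np.floor(n))     # stop is off the grid: stay below it
    return start + step * np.arange(n + 1)

dt = 200 / np.ceil(200 / 0.07)           # 0.06997900629811056
t = arange_inclusive(0, 200, dt)         # 2859 samples, last one ~200
t2 = arange_inclusive(0, 200, 0.07)      # 2858 samples, last one ~199.99
```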

J. Serra
  • According to the [documentation](https://numpy.org/doc/stable/reference/generated/numpy.arange.html), it **is** the expected behavior: *When using a non-integer step, such as 0.1, the results will often not be consistent. It is better to use numpy.linspace for these cases.* – Quang Hoang Mar 25 '21 at 15:36
  • @QuangHoang, I know the 2nd issue is documented and I can use numpy.linspace, but then I am concerned the time steps may differ from my calculated dt. – J. Serra Mar 25 '21 at 15:41
  • Does `total = np.ceil(t_max/dt); t=np.linspace(0,t_max, int(total)+1)` work for you? – Quang Hoang Mar 25 '21 at 15:48
  • That is what I'm doing with ```dt = t_max/np.ceil(t_max/dt) ```. I could use ```np.ceil(t_max/dt)``` to get the number of divisions in ```numpy.linspace```, but then I may transfer my problem involving non-matching ```t_max``` vs ```t[-1]``` to non-matching ```dt``` vs ```(spacing in linspace array)```. – J. Serra Mar 25 '21 at 15:49
  • In regards to the first problem, the reason you are getting ```2858.0000000000005``` and not ```2858.0``` is because of floating point precision. I think by default the division will round off to a 64 bit float. Running ```np.divide(t_max, dt, dtype=np.float32)``` you will see that it does equal ```2858.0```. – Kevin Mar 25 '21 at 15:49
  • You said the problem is documented, but you don't seem to follow the suggestion. `linspace` works pretty well, `t[-1]` is exactly `200`. – Quang Hoang Mar 25 '21 at 15:50
  • I was trying not to make many changes in my code and to keep the arange calls. I agree with you @QuangHoang, that linspace is the alternative. Also, having the exact ```t_max``` is more critical than the exact ```dt```. @Kevin also added a valid point which may solve my issues. – J. Serra Mar 25 '21 at 15:58
  • Python, and numpy especially, do not support arbitrary precision floats. That means that if your spacing cannot be expressed exactly in 53 bits, you can either round the spacing or the bounds. The decision is up to you. There is no magic in this world. – Mad Physicist Mar 25 '21 at 20:45
  • I've read the comments, and I think they don't express strongly enough that this is not a wise approach overall. Floating point numbers generally don't line up to the exact bit, and relying on quirks of `arange` vs `linspace`, etc., is just going to lead to frustration and errors. And these quirks aren't generally guaranteed, so even if you get it to work for your test using `0.5`, it may not work with `0.51`; or it may work now but not after your next update; etc. – tom10 Mar 26 '21 at 17:09

0 Answers