
Suppose I want to construct an array in Python/numpy using the `r_` index trick, like so:

>>> import numpy as np
>>> np.r_[0.02:0.04:0.01]
array([ 0.02,  0.03])
>>> np.r_[0.04:0.06:0.01]
array([ 0.04,  0.05])

Both cases work as expected. If I change the limits though:

>>> np.r_[0.03:0.05:0.01] #?????
array([ 0.03,  0.04,  0.05])

Why does this happen? Is it something to do with inexact floating point representation? Or is this a bug?

rs1223
  • The linked answer does not mention `np.r_` at all. That answer is relevant because `np.r_` can use `arange`, but it isn't an exact duplicate. Lucky I got my answer in just under the wire. :) – hpaulj May 22 '16 at 21:02

1 Answer


With a complex 'step', np.r_ uses linspace:

In [68]: np.r_[0.02:.04:3j]
Out[68]: array([ 0.02,  0.03,  0.04])

In [69]: np.r_[0.03:.05:3j]
Out[69]: array([ 0.03,  0.04,  0.05])

With a float 'step' it uses arange, whose documentation notes that results with non-integer steps are often inconsistent (the endpoint may or may not be included), and recommends linspace when you need more control.
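The extra element in the question can be traced to floating-point rounding in arange's length computation. A minimal sketch of the cause, assuming the number of elements is taken as roughly ceil((stop - start) / step):

>>> import numpy as np
>>> (0.04 - 0.02) / 0.01          # exactly 2.0, so two elements
2.0
>>> (0.05 - 0.03) / 0.01          # just over 2, so a third element sneaks in
2.0000000000000004
>>> np.arange(0.03, 0.05, 0.01)   # the arange call behind np.r_[0.03:0.05:0.01]
array([ 0.03,  0.04,  0.05])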

np.mgrid also accepts the pseudo-complex step notation.
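For example, a single slice with a complex step should behave the same way through mgrid:

>>> np.mgrid[0.03:0.05:3j]
array([ 0.03,  0.04,  0.05])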

Look in /usr/lib/python3/dist-packages/numpy/lib/index_tricks.py for more details on how these classes work.
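A much-simplified sketch of the slice handling in those classes (the real AxisConcatenator code also handles string arguments, multiple keys and matrix output; the helper name r_like here is made up for illustration):

import numpy as np

def r_like(key):
    """Simplified sketch of how r_/mgrid interpret a single slice."""
    start = key.start if key.start is not None else 0
    step = key.step if key.step is not None else 1
    if isinstance(step, complex):
        # complex 'step': its magnitude is a sample count, endpoint included
        return np.linspace(start, key.stop, num=int(abs(step)))
    # real 'step': handed straight to arange, with its endpoint quirks
    return np.arange(start, key.stop, step)

With this sketch, r_like(slice(0.03, 0.05, 0.01)) takes the arange branch and reproduces the surprising third element, while r_like(slice(0.03, 0.05, 3j)) takes the linspace branch and always returns exactly three points.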

hpaulj