np.arange(1,-1,-1) 

yields

array([1, 0])

as expected.

np.arange(1,-1,-0.5)
Out[8]: array([ 1. ,  0.5,  0. , -0.5])

Again all fine.

np.arange(1,-1,-0.2)
Out[10]: 
array([1.00000000e+00,   8.00000000e-01,   6.00000000e-01,
     4.00000000e-01,   2.00000000e-01,   2.22044605e-16,
    -2.00000000e-01,  -4.00000000e-01,  -6.00000000e-01,
    -8.00000000e-01])

What happened to the element that should be zero?

Same thing happens with

np.arange(1,-1,-0.1)
Out[11]: 
array([1.00000000e+00,   9.00000000e-01,   8.00000000e-01,
     7.00000000e-01,   6.00000000e-01,   5.00000000e-01,
     4.00000000e-01,   3.00000000e-01,   2.00000000e-01,
     1.00000000e-01,   2.22044605e-16,  -1.00000000e-01,
    -2.00000000e-01,  -3.00000000e-01,  -4.00000000e-01,
    -5.00000000e-01,  -6.00000000e-01,  -7.00000000e-01,
    -8.00000000e-01,  -9.00000000e-01])

This is definitely not desirable, but is it expected behaviour?

user1654183

2 Answers


Your zero elements are where they are expected to be. They are just represented in a way you do not expect.

2.22044605e-16 is a number written in scientific notation, equal to 0.000000000000000222044605, which is almost 0. You can read more about the imprecision of floating-point numbers here
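To make this concrete, here is a small sketch (not part of the original answer) showing that the near-zero element compares unequal to `0.0` but passes a tolerance check with `np.isclose`:

```python
import numpy as np

# The "missing" zero in np.arange(1, -1, -0.2) is actually present,
# stored as a tiny rounding residue of about 2.2e-16.
a = np.arange(1, -1, -0.2)

print(a[5])                   # ~2.22e-16, not exactly 0.0
print(a[5] == 0.0)            # False: exact comparison fails
print(np.isclose(a[5], 0.0))  # True: compare with a tolerance instead
```

This is why comparing floats with `==` is generally discouraged; a tolerance-based comparison is the robust way to find the "zero" element.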

Salvador Dali

The answer is that it is quite expected. Floating-point numbers cannot be represented exactly, simply because only a limited number of bits is available. This question has already been asked and answered, for instance here: https://stackoverflow.com/a/5160355/3115901
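As a practical workaround (not mentioned in the linked answer, just a common alternative), `np.linspace` takes a point count rather than a step size, so it does not accumulate the rounding error of a repeatedly applied step:

```python
import numpy as np

# np.linspace takes the number of points instead of a step size,
# so each value is computed from an integer index times one delta
# and the endpoints are hit exactly.
b = np.linspace(1, -1, 11)   # 11 evenly spaced points from 1 to -1
print(b)                     # [... 0.2,  0. , -0.2 ...]
print(b[5] == 0.0)           # True: the midpoint lands exactly on 0.0
```

For floating-point ranges, the NumPy documentation itself recommends `linspace` over `arange` for exactly this reason.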

pausag