There are a couple of things going on here, and not everything is what it appears to be, so let's separate them out.
Float Approximation
As both @COLDSPEED and @Eric alluded to, when you have a floating point number, it's only an approximation of the "real" value that you intended to store. The reason is that computers store numbers in binary - base 2 representation - and 100.12 is 100 plus 12/100. The integer portion is easy to represent exactly using powers of 2, but the fractional portion has no exact representation in base 2 (you can see this by using Wolfram Alpha and running the query "12/100 base 2"). So, to store 100.12, the computer represents 100 using powers of 2 (easy) and approximates 12/100 using negative powers of 2 (impossible to do exactly), and it only has 32 or 64 bits in total to do so. Everything beyond those bits is rounded off, so the value of 12/100 that is stored is not exact. The more bits in the representation, the closer the approximation gets - you could get arbitrarily close using arbitrarily many bits, but you'll never hit it exactly.
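If you want to see exactly what gets stored for the literal 100.12, the standard library can show you (a small illustrative aside, not part of your original code):

from decimal import Decimal

# Decimal(float) converts the stored base-2 value exactly into base 10,
# revealing the approximation hiding behind the literal 100.12.
print(Decimal(100.12))   # 100.1200000000000045474735088646411895751953125
print((100.12).hex())    # the same stored value in exact hexadecimal (base-2) notation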
Float32 vs Float64 Approximation
Depending on how many bits are used to store each floating point number, 32 or 64, you'll get a better or worse approximation. You can see this by asking Python to print out 50 decimal places of each number (far more than are meaningful, but it illustrates the point - we'll get to printing next):
In [2]: print("%.50f"%(np.array([100.12],dtype=np.float32)[0]))
100.12000274658203125000000000000000000000000000000000
In [3]: print("%.50f"%(np.array([100.12],dtype=np.float64)[0]))
100.12000000000000454747350886464118957519531250000000
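The difference comes from how many mantissa (fraction) bits each type stores: 23 for float32 and 52 for float64. You can ask NumPy directly (a quick illustrative check, assuming a reasonably recent NumPy):

import numpy as np

# nmant is the number of mantissa bits each type stores;
# precision is roughly how many decimal digits that buys you.
for dtype in (np.float32, np.float64):
    info = np.finfo(dtype)
    print(dtype.__name__, "-", info.nmant, "mantissa bits, ~", info.precision, "decimal digits")

This reports 23 mantissa bits for float32 and 52 for float64, which is why the float32 printout above drifts away from 100.12 after far fewer digits than the float64 one.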
Intermediate Calculations
Doing intermediate calculations can also change the final binary representation. Here's an example of that, comparing 100.12 to 100.02 + 0.10:
In [4]: print("%.50f"%(100.12))
100.12000000000000454747350886464118957519531250000000
In [5]: print("%.50f"%(100.02+0.10))
100.11999999999999033661879366263747215270996093750000
In the first case, Python is creating an approximation for 12/100 using powers of 2. In the second case, Python is creating an approximation for 2/100 using powers of 2, then another approximation for 1/10 using powers of 2, and then the result of adding those two approximations is rounded yet again - leading to a slightly different final value than approximating 12/100 directly.
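The same thing happens in any chain of operations: every intermediate result is rounded to the nearest representable value before the next step. A classic illustration of the effect (not from your code, just for demonstration):

# Ten additions of 0.1, each one rounded to the nearest representable double
# along the way, do not land exactly on 1.0.
total = sum([0.1] * 10)
print("%.50f" % total)   # slightly below 1.0
print("%.50f" % 1.0)
print(total == 1.0)      # False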
Print Representation
Another layer of approximation comes from printing. When Numpy/Python prints 100.12000275, that's just part of the string representation of the Numpy array, which in turn uses a rounded string representation of each element of the array. So don't assume that printing an array shows you the "absolute" version of what the computer has stored. If you pull out that particular value and print it with a formatting string, you'll see there are more decimal places:
In [7]: array_structured
Out[7]:
array([(0, 100.12000275), (0, 0. )],
dtype=[('index', '<i4'), ('price', '<f4')])
In [8]: print("%.50f"%(array_structured['price'][0]))
100.12000274658203125000000000000000000000000000000000
However, I should point out that 50 decimal places is far more precision than the float actually carries. The long tail of digits above is simply the exact decimal expansion of the nearest base-2 value the computer could store; only the leading digits reflect the number you actually asked for.
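One way to convince yourself that the shortened display (100.12000275) and the long formatted value are just two views of the same stored bits is to feed the displayed digits back in (a small illustrative check):

import numpy as np

a = np.float32(100.12)          # the value that was assigned
b = np.float32(100.12000275)    # the digits the array display showed
print(a == b)                   # True: both literals round to the same 32-bit pattern
print("%.50f" % a)              # 100.12000274658203125000... for both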
Compare this to the case where you use np.float64: again, you see the layers of representation. Printing the array makes it look like you now have "exactly" 100.12, but using print formatting reveals that you only have a closer approximation:
In [10]: array_structured = np.zeros(2, dtype=[('index', np.int32),('price', np.float64)])
In [11]: array_structured['price'][0] = 100.12
In [12]: array_structured
Out[12]:
array([(0, 100.12), (0, 0. )],
dtype=[('index', '<i4'), ('price', '<f8')])
In [13]: print("%.50f"%(array_structured['price'][0]))
100.12000000000000454747350886464118957519531250000000
And again, there are two conversions at work here: Python turns the base 10 literal "100.12" into a base 2 representation to store its value, and when you print it out, it turns that base 2 representation back into base 10. The first conversion is where the approximation happens; the second just shows you, in decimal, what was actually stored.
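Since Python 3.1, that second conversion is done cleverly: the default repr of a float is the shortest base-10 string that converts back to the exact same base-2 value, which is why plain print(100.12) looks clean even though the stored value isn't exact. A quick demonstration:

x = 100.12
print(repr(x))               # 100.12 - the shortest string that maps back to the same double
print(float(repr(x)) == x)   # True: base 2 -> base 10 -> base 2 loses nothing here
print("%.50f" % x)           # the exact stored value, as shown above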
Equality Check
Your equality check doesn't account for these multiple layers of representation because of the nature of the == operator. You are interpreting it as a mathematical equals operator, as in 2 + 2 = 4. However, because of the binary representation of floating point numbers, it doesn't work the way you would expect: == is not checking whether the mathematical value 100.12 equals the value in your array; it is checking whether the computer representation of the thing on the left is bit-for-bit identical to the computer representation of the thing on the right - and, as shown above, those two approximations need not be identical.
To check whether two floating point numbers are (effectively) equal, don't use ==; use math.isclose or compare the absolute difference of the two numbers against a small tolerance:
In [18]: a = 100.12
In [19]: b = 100.02 + 0.10
In [20]: import math
In [21]: a==b
Out[21]: False
In [22]: math.isclose(a,b)
Out[22]: True
In [25]: abs(a-b)<1e-10
Out[25]: True
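NumPy also provides array-aware versions of this check, np.isclose and np.allclose, which compare elementwise within a relative/absolute tolerance. A small sketch comparing the float32 and float64 approximations of 100.12 (using the default tolerances):

import numpy as np

# The float32 and float64 approximations of 100.12 are different numbers,
# but they are "close" within NumPy's default tolerances.
price32 = np.float64(np.float32(100.12))   # the float32 approximation, widened to float64
print(price32 == 100.12)                   # False - two different approximations
print(np.isclose(price32, 100.12))         # True
print(np.allclose([price32], [100.12]))    # True - the elementwise/array version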
Your question certainly is a rabbit hole into the guts of Python... Hope you enjoyed this little excursion.