It appears you're familiar with the problems of binary floating point accuracy, but anybody who isn't should read Is floating point math broken?
Converting from a type with higher precision to one with lower precision involves rounding. The rules for rounding binary floating point are well established: the default IEEE 754 mode, round to nearest, delivers the closest representable value to the one you started with. It's quite easy for two distinct 64-bit values to round to the same 32-bit value, because the intervals between consecutive 32-bit values are so much wider.
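How much wider? A quick check with np.spacing (which reports the gap from a value to the next representable one) shows that near 0.0003 the float32 gap is about half a billion times (2^29) wider than the float64 gap:

>>> import numpy as np
>>> f'{np.spacing(np.float64(0.0003)):.6e}'
'5.421011e-20'
>>> f'{np.spacing(np.float32(0.0003)):.6e}'
'2.910383e-11'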
In your examples you can see that the two values are ever so slightly different: one is a little below 0.0003 and the other a little above. But look at how close they are!
>>> x = np.float64(0.0003)
>>> y = np.float64(0.0001 * 3)
>>> f'{x:.65f}'
'0.00029999999999999997371893933895137251965934410691261291503906250'
>>> f'{y:.65f}'
'0.00030000000000000002792904796322659422003198415040969848632812500'
When you convert them to float32 they become identical, because both float64 values round to the same nearest float32 value.
>>> x = x.astype(np.float32)
>>> f'{x:.65f}'
'0.00030000001424923539161682128906250000000000000000000000000000000'
>>> y = y.astype(np.float32)
>>> f'{y:.65f}'
'0.00030000001424923539161682128906250000000000000000000000000000000'
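And indeed they now compare equal (bool() just keeps the output consistent across numpy versions):

>>> bool(x == y)
True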
When you look at the closest alternatives the conversion could have chosen, it's easy to see why both values rounded the way they did.
>>> one = np.float32(1.0)
>>> f'{np.nextafter(x, one):.65f}'
'0.00030000004335306584835052490234375000000000000000000000000000000'
>>> f'{np.nextafter(x, -one):.65f}'
'0.00029999998514540493488311767578125000000000000000000000000000000'
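To make that explicit, here's a quick check continuing the session above (x64 recreates the original float64, since x was converted in place earlier): the distance from the original to each candidate float32 value shows the conversion picked the one with the smallest error.

>>> x64 = np.float64(0.0003)
>>> for candidate in (np.nextafter(x, -one), x, np.nextafter(x, one)):
...     print(f'{abs(np.float64(candidate) - x64):.6e}')
...
1.485460e-11
1.424924e-11
4.335307e-11

The middle line, the value the conversion actually chose, is the closest of the three.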