Regarding efficiency: how can we create a large NumPy array whose values are floats within a specific range?
For example, for a 1-D NumPy array of fixed size whose values lie between 0 and 200,000,000.00 (i.e. values in [0, 200,000,000.00]), I can create the array using the smallest float data type (float16) and then validate any new value (from user input) before inserting it into the array:
import numpy as np

a = np.empty(shape=(1000,), dtype=np.float16)
pos = 0
new_value = float(input('Enter new value: '))
# validate: round to two decimals, then check the bounds
new_value = round(new_value, 2)
if 0.00 <= new_value <= 200000000.00:
    # fill in new value
    a[pos] = new_value
    pos = pos + 1
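As a side note on the chosen dtype (this is an observation about float precision, not part of the original code): float16 cannot actually hold values anywhere near 200,000,000.00, since its largest finite value is 65504, and float32 (about 7 significant decimal digits) cannot keep two decimal places at that magnitude either, so float64 would be needed. A quick check:

```python
import numpy as np

# float16 overflows to inf long before 200,000,000
print(np.finfo(np.float16).max)     # 65504.0
print(np.float16(200_000_000.0))    # inf

# float32 has ~7 significant decimal digits, so the cents are lost at this scale
print(np.float32(200_000_000.25))   # rounds to the nearest representable value

# float64 (~15-16 significant digits) preserves two decimals at this magnitude
print(np.float64(200_000_000.25))   # 200000000.25
```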
The question is: can we enforce the validity of new_value (in terms of the already-known minimum/maximum values and number of decimals) based on the dtype of the array? In other words, since we know the range and the number of decimals at the time the array is created, does this give us any opportunity to insert valid values into the array (more) efficiently?
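For context, NumPy dtypes describe storage (width and layout), not value ranges, so as far as I can tell a dtype alone cannot reject out-of-range input. A minimal sketch of the kind of validation I currently have in mind (the helper name validate and the bounds are my own, for illustration only):

```python
import numpy as np

LOW, HIGH = 0.00, 200_000_000.00  # the known bounds from the question


def validate(raw):
    """Parse one user-supplied value, round to two decimals, and bounds-check it."""
    value = round(float(raw), 2)
    if not (LOW <= value <= HIGH):  # O(1) comparison; no need to materialize a range
        raise ValueError(f"{value} is outside [{LOW}, {HIGH}]")
    return value


# float64 is used here because two decimals at this magnitude exceed float16/float32 precision
a = np.empty(shape=(1000,), dtype=np.float64)
a[0] = validate('123.456')  # stores 123.46
```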