I'm implementing an NN library and I wrote the softmax activation function like this:
import numpy as np

def function(self, z):
    # subtract the max before exponentiating, for numerical stability
    maxValue = np.max(z)
    e_x = np.exp(z - maxValue)
    # normalize so each column sums to 1
    returnValue = e_x / e_x.sum(axis=0)
    return returnValue
As you can see, it's a regular softmax. I thought subtracting maxValue would be enough to keep the values in a safe range, but the exponentials still come out so small that the division produces NaN. That breaks my program: the model doesn't learn because the output is NaN. How do you suggest fixing this? I tried bigFloat, but it doesn't seem to be compatible with numpy arrays.
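For context, here's a minimal reproduction of what I think is happening (I'm assuming z is a 2-D batch with classes along axis 0, which matches the axis=0 sum; the values here are made up):

import numpy as np

# two samples as columns; the second sample's logits sit far below the global max
z = np.array([[1000.0, 1.0],
              [ 999.0, 0.0]])

maxValue = np.max(z)          # global max = 1000, taken over the whole batch
e_x = np.exp(z - maxValue)    # second column underflows to [0, 0]
print(e_x / e_x.sum(axis=0))  # second column is 0/0 -> [nan, nan]

The NaNs show up exactly for the columns whose values are all far below the global max, since every entry underflows to zero and the column sum becomes zero.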