I was looking at some code for a neural network written in Python with NumPy, and I noticed that, when passing the results of the neurons through the activation function:
def sigmoid(x):
    return 1 / (1 + np.exp(-x))
Instead of passing the weights one by one, that person called the function with an np.array as the argument!
Like so:
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.zeros((2, 2))
print("Before\n", x)
x = sigmoid(x)
print("After\n", x)
To my surprise, this worked. I have always called the activation function in a loop, once for each weight individually, since the function takes a single number as input (the loop version is sketched below the output).
output:
Before
[[ 0. 0.]
[ 0. 0.]]
After
[[ 0.5 0.5]
[ 0.5 0.5]]
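For comparison, here is a rough sketch of the loop-based approach I had been using. The nested loop and the out variable are just illustrative, but it produces the same result as the vectorized call:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Apply sigmoid to each entry individually
x = np.zeros((2, 2))
out = np.empty_like(x)
for i in range(x.shape[0]):
    for j in range(x.shape[1]):
        out[i, j] = sigmoid(x[i, j])

print(out)  # [[0.5 0.5]
            #  [0.5 0.5]]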
I also tried some other operations and they seem to work fine.
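For example (these particular operations are just ones I picked to illustrate, not necessarily the exact ones I tried), other NumPy functions and plain arithmetic also seem to apply element-wise to whole arrays:

import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.exp(a))   # exponential of every entry
print(np.tanh(a))  # tanh of every entry, another common activation
print(a * 2 + 1)   # arithmetic is applied element-wise too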
Is this 'correct'? Are there cases where this doesn't work, or are NumPy arrays designed to do that? What I really want to know is: can I use this, or am I better off doing what I was doing before?