I have made a neural network from scratch and want to make it run a little bit faster. I'm wondering if vectorizing my forward prop would make it faster. My current forward prop code is:

def forwardProp(self, inputs):
    for i in range(self.dimensions[1]):
        self.secondLayerNeurons[i] = self.relu(np.dot(self.firstLayerWeights[i], inputs) + self.firstLayerBiases[i])
    for i in range(self.dimensions[2]):
        self.outputNeurons[i] = self.sigmoid(np.dot(self.secondLayerWeights[i], self.secondLayerNeurons) + self.secondLayerBiases[i])

If vectorization will make this faster, how would I vectorize this? Thanks in advance!

Joey

2 Answers

I'm wondering if vectorizing my forward prop would make it faster

Yes!

How would I vectorize this?

You want to eliminate loops and figure out vector algebra that will do the same thing.

Let's say self.firstLayerWeights.shape is (N, D). You want to calculate the row-wise dot product of this matrix with inputs. Let's say you implement this logic in a function called rowwise_dot:

def rowwise_dot(inpA, inpB):
    # Multiply element-wise (broadcasting inpB across the rows of inpA),
    # then sum along each row to get one dot product per row
    return (inpA * inpB).sum(axis=-1)

Now that you have the rowwise_dot function, you can add the entire vector of self.firstLayerBiases without having to loop:

rowwise_dot(self.firstLayerWeights, inputs) + self.firstLayerBiases

Next, make sure self.relu and self.sigmoid can take vectors and return whatever you need them to for each element of the vector. This might involve similar shenanigans as vectorizing the rowwise dot product.
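For example, with NumPy this usually requires no shenanigans at all, because np.maximum and np.exp are element-wise ufuncs. A minimal sketch, written as standalone functions rather than the methods on your class:

```python
import numpy as np

def relu(x):
    # Element-wise ReLU: works on scalars and arrays of any shape
    return np.maximum(0, x)

def sigmoid(x):
    # Element-wise logistic sigmoid
    return 1.0 / (1.0 + np.exp(-x))
```

If your existing methods use plain `math.exp` or an `if x > 0` branch, swapping in these array-aware versions is what makes them vectorizable.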

So in the end you have:

def forwardProp(self, inputs):
    self.secondLayerNeurons = self.relu(rowwise_dot(self.firstLayerWeights, inputs) + self.firstLayerBiases)
    self.outputNeurons = self.sigmoid(rowwise_dot(self.secondLayerWeights, self.secondLayerNeurons) + self.secondLayerBiases)
Pranav Hosangadi

Vectorizing forward prop makes the MLP run a lot faster. I used the @ operator.

def forwardProp(self, inputs):
    self.secondLayerNeurons = self.sigmoid(self.w1 @ inputs + self.b1)
    self.outputNeurons = self.sigmoid(self.w2 @ self.secondLayerNeurons + self.b2)
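To see that the `@` version computes the same thing as the original loop, here is a small self-contained check; the shapes (a 4-neuron layer fed 3 inputs) and the names w1, b1, x are made up for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
w1 = rng.standard_normal((4, 3))  # hypothetical weights: 4 neurons, 3 inputs
b1 = rng.standard_normal(4)       # hypothetical biases
x = rng.standard_normal(3)        # hypothetical input vector

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Loop version, as in the question: one dot product per neuron
loop_out = np.array([sigmoid(np.dot(w1[i], x) + b1[i]) for i in range(4)])

# Vectorized version: one matrix-vector product for the whole layer
vec_out = sigmoid(w1 @ x + b1)

assert np.allclose(loop_out, vec_out)
```

The speedup comes from replacing the Python-level loop with a single BLAS-backed matrix-vector product.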
Joey