Reading about neuroevolution in a slide presentation, I came across the phrase:
"network output is calculated the standard way"
I have successfully implemented a simple feedforward mechanism by following some guides (using a vector representation of weights; links 1, 2, 3), and I understand (more or less) how recurrent networks could be calculated.
What I couldn't find is how a neural network with an arbitrary topology would be calculated. Is there a 'standard way' (algorithm) to do it?
One way I can imagine (assuming a feedforward topology), though very time-consuming, would be to loop over all neurons repeatedly, activating each neuron whose inputs are already known, until the output is calculated.
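To illustrate the brute-force idea, here is a minimal sketch (hypothetical function and variable names, sigmoid activation assumed; it relies on the feedforward assumption, so a cyclic graph would loop forever):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def evaluate(inputs, connections, input_ids, output_ids, bias_id=0):
    """Sweep over all neurons until every value is computed.

    connections: list of (src, dst, weight) tuples of a feedforward net.
    """
    values = {bias_id: 1.0}                       # bias node always outputs 1
    values.update(zip(input_ids, inputs))
    incoming = {}                                  # dst -> [(src, weight), ...]
    for src, dst, w in connections:
        incoming.setdefault(dst, []).append((src, w))
    pending = set(incoming) - set(values)
    while pending:                                 # each sweep activates >= 1 node
        for node in list(pending):
            srcs = incoming[node]
            if all(s in values for s, _ in srcs):  # all inputs ready?
                total = sum(values[s] * w for s, w in srcs)
                values[node] = sigmoid(total)
                pending.discard(node)
    return [values[o] for o in output_ids]
```

Worst case this is O(n) sweeps over n neurons, which is where the "very time consuming" part comes from.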
Another method I can imagine would be to sort the arbitrary topology into layers first (again assuming a feedforward topology) and then calculate it layer by layer.
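The layering idea amounts to a topological sort of the connection graph, taking each "ready" front of nodes as one layer. A minimal sketch (hypothetical names; assumes the graph is acyclic):

```python
from collections import defaultdict

def layers_from_topology(connections, input_ids):
    """Group nodes of a feedforward (acyclic) network into layers that can
    be evaluated in order: every node's inputs live in earlier layers.
    Kahn-style topological sort, collecting whole zero-in-degree fronts.

    connections: list of (src, dst) pairs.
    """
    out_edges = defaultdict(list)
    in_degree = defaultdict(int)
    nodes = set(input_ids)
    for src, dst in connections:
        out_edges[src].append(dst)
        in_degree[dst] += 1
        nodes.update((src, dst))
    ready = [n for n in nodes if in_degree[n] == 0]  # inputs (and bias)
    layers = []
    while ready:
        layers.append(sorted(ready))
        nxt = []
        for n in ready:
            for m in out_edges[n]:
                in_degree[m] -= 1
                if in_degree[m] == 0:               # all inputs placed
                    nxt.append(m)
        ready = nxt
    return layers
```

Once the layers are known, each one can be evaluated with an ordinary per-layer weighted sum, so the sort only has to be done once per topology.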
QUESTIONS
What is the 'standard way' (algorithm) to calculate the output of a network with an arbitrary topology?
ASSUMPTIONS
- A feedforward topology (a recurrent topology as a bonus; that is probably much more complicated).
- Bias node present.
PS: I'm working in Python, following the NEAT paper.