For my project I wrote a not-fully-connected feedforward network using TensorFlow.
Because the network is not fully connected, I generate each weight as a separate scalar variable rather than using matrix variables.
This gives me the flexibility to connect individual neurons between two neighbouring layers.
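One common alternative to per-connection scalar variables is to keep a single weight matrix per layer and multiply it elementwise by a fixed 0/1 mask that encodes which connections exist; absent connections stay exactly zero and receive zero gradient. This is only a minimal sketch of that idea (the layer sizes, mask, and function names here are hypothetical, not taken from the linked repository):

```python
import tensorflow as tf

# Hypothetical sketch: one weight matrix plus a constant 0/1 mask,
# instead of one scalar tf.Variable per connection. A single matrix
# variable keeps the graph small even for many layers.
n_in, n_out = 4, 3
mask = tf.constant([[1., 0., 1.],
                    [0., 1., 0.],
                    [1., 1., 0.],
                    [0., 0., 1.]])  # 1 = connection exists, 0 = no connection
W = tf.Variable(tf.random.normal([n_in, n_out]))
b = tf.Variable(tf.zeros([n_out]))

def sparse_layer(x):
    # Masked weights: entries where mask is 0 contribute nothing,
    # and the same mask zeroes their gradients during training.
    return tf.nn.relu(tf.matmul(x, W * mask) + b)

x = tf.ones([2, n_in])
y = sparse_layer(x)
```

Since the mask is a constant, only one variable per layer is created no matter how many connections are pruned, which may also help with the memory growth you describe.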
Unfortunately, training this network takes too much memory. Training also fails when the network has more layers (I use a loop to define the number of layers).
I have published my code on GitHub: https://github.com/hezhouyuanren/BP-like-Decoder.git . I'm hoping for your help.