Recently I have been developing a CNN in TensorFlow v0.11 that learns to play the game of Go.

I read this paper: Teaching Deep Convolutional Neural Networks to Play Go, in which the authors enforce symmetry constraints on the convolution weights. I wanted to implement that and test the results for myself, but despite searching extensively I have been unable to find an efficient way to do it. A similar question was asked here; however much I tried, I could not find a way to enforce these symmetries during training.

Has anyone done something similar before?


1 Answer

One way to enforce symmetry is to store asymmetric weights and apply a transformation that makes them symmetric before using them. For example, if I want symmetry across the diagonal of a matrix, `0.5 * weights * tf.transpose(weights)` will give me that. Other permutations will give you other types of symmetry, and these are all differentiable.
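A minimal sketch of this reparametrization idea, written in NumPy for self-containment (the comments below suggest summing with the transpose rather than multiplying, which is the variant shown here; variable names are illustrative, and in TensorFlow the same expression built from `tf.Variable` and `tf.transpose` is differentiable, so backprop updates the stored unconstrained weights):

```python
import numpy as np

# Store unconstrained weights and derive a symmetric version before use.
rng = np.random.default_rng(0)
weights = rng.standard_normal((3, 3))   # unconstrained stored parameters

# Averaging with the transpose yields a matrix symmetric across the diagonal.
sym = 0.5 * (weights + weights.T)

assert np.allclose(sym, sym.T)          # symmetry holds by construction
```

Because the symmetrization is a simple differentiable transform, gradients flow through it to the underlying stored weights, at the cost of a little extra computation per step.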

Alexandre Passos
  • 5,186
  • 1
  • 14
  • 19
  • My question to you is: when I am training the network, what would the behaviour of backpropagation be? Would it train the weights correctly? I will try this out nonetheless; I just wanted to know if you knew. – Aditya Rajagopal Dec 04 '16 at 03:56
  • Reparametrization tricks always do the right thing with backprop, at the expense of adding computation. – Alexandre Passos Dec 05 '16 at 05:08
  • I would sum rather than multiplying though: `0.5 * (weights + tf.transpose(weights))`. Moreover, if the matrices are batched with shape [batch_size, N, N] you can use `tf.linalg.transpose`, which operates on batched matrices. – linello Jan 10 '19 at 10:22
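For Go filters specifically, the symmetry in the paper is invariance under the eight board rotations and reflections (the dihedral group D4), not just a single transpose. A hedged sketch of the same reparametrization trick for that case, again in NumPy with illustrative names (the averaging is differentiable, so the TensorFlow analogue can wrap stored filter weights):

```python
import numpy as np

def symmetrize_d4(f):
    """Average a square 2-D filter over all 8 rotations/reflections.

    The mean over a group orbit is invariant under every element of the
    group, so the result is a D4-symmetric filter.
    """
    variants = [np.rot90(f, k) for k in range(4)]        # 4 rotations
    variants += [np.rot90(f.T, k) for k in range(4)]     # 4 reflections
    return sum(variants) / 8.0

f = np.random.default_rng(1).standard_normal((5, 5))     # stored weights
g = symmetrize_d4(f)

# Invariant under rotation and under reflection across the diagonal.
assert np.allclose(g, np.rot90(g))
assert np.allclose(g, g.T)
```

This reduces the effective number of free parameters per filter while keeping the stored weights unconstrained, which is what lets ordinary backprop work unchanged.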