
I am trying to implement this paper for sound classification: https://raw.githubusercontent.com/karoldvl/paper-2015-esc-convnet/master/Poster/MLSP2015-poster-page-1.gif
The paper mentions that 0.001 L2 weight decay was applied to each layer, but I can't figure out how to do that in TensorFlow.

I found a similar question (How to define weight decay for individual layers in TensorFlow?) that uses tf.nn.l2_loss, but it is not clear how I can apply that approach to my network. Also, tf.nn.l2_loss has no parameter where I could pass the 0.001 scale factor.
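If I understand that approach correctly, the 0.001 factor would have to be multiplied in by hand, roughly like the sketch below (weights1, weights2 and cross_entropy are hypothetical names on my side; with tf.layers I don't have direct handles to the kernel variables):

# tf.nn.l2_loss(w) computes sum(w ** 2) / 2 with no built-in scale,
# so the 0.001 weight decay factor is applied manually:
weight_decay = 0.001 * (tf.nn.l2_loss(weights1) + tf.nn.l2_loss(weights2))
total_loss = cross_entropy + weight_decay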

My Network:

net = tf.layers.conv2d(inputs=x, filters=80, kernel_size=[57, 6], strides=[1, 1], padding="same", activation=tf.nn.relu)
print(net)
net = tf.layers.max_pooling2d(inputs=net, pool_size=[4, 3], strides=[1, 3])
print(net)
net = tf.layers.dropout(inputs=net, rate=keep_prob)
print(net)
net = tf.layers.conv2d(inputs=net, filters=80, kernel_size=[1, 3], strides=[1, 1], padding="same", activation=tf.nn.relu)
print(net)
net = tf.layers.max_pooling2d(inputs=net, pool_size=[1, 3], strides=[1, 3])
print(net)
net = tf.layers.flatten(net)
print(net)
# Dense Layer
net = tf.layers.dense(inputs=net, units=5000, activation=tf.nn.relu)
print(net)
net = tf.layers.dropout(inputs=net, rate=keep_prob)
print(net)
net = tf.layers.dense(inputs=net, units=5000, activation=tf.nn.relu)
print(net)
net = tf.layers.dropout(inputs=net, rate=keep_prob)
print(net)
logits = tf.layers.dense(inputs=net, units=num_classes)
print("logits: ", logits)

Output:

Tensor("Model/conv2d/Relu:0", shape=(?, 530, 129, 80), dtype=float32)
Tensor("Model/max_pooling2d/MaxPool:0", shape=(?, 527, 43, 80), dtype=float32)
Tensor("Model/dropout/Identity:0", shape=(?, 527, 43, 80), dtype=float32)
Tensor("Model/conv2d_2/Relu:0", shape=(?, 527, 43, 80), dtype=float32)
Tensor("Model/max_pooling2d_2/MaxPool:0", shape=(?, 527, 14, 80), dtype=float32)
Tensor("Model/flatten/Reshape:0", shape=(?, 590240), dtype=float32)
Tensor("Model/dense/Relu:0", shape=(?, 5000), dtype=float32)
Tensor("Model/dropout_2/Identity:0", shape=(?, 5000), dtype=float32)
Tensor("Model/dense_2/Relu:0", shape=(?, 5000), dtype=float32)
Tensor("Model/dropout_3/Identity:0", shape=(?, 5000), dtype=float32)
logits:  Tensor("Model/dense_3/BiasAdd:0", shape=(?, 20), dtype=float32)

I found an implementation of this paper: https://github.com/karoldvl/paper-2015-esc-convnet/blob/master/Code/_Networks/Net-DoubleConv.ipynb, but it's written in pylearn2.

How can I add 0.001 L2 weight decay in my code?


1 Answer


To add L2 regularization to tf.layers.conv2d layers, use the kernel_regularizer parameter. For example, to apply the 0.001 penalty to the first layer of your network:

net = tf.layers.conv2d(
    inputs=x, filters=80, kernel_size=[57, 6], strides=[1, 1],
    padding="same", activation=tf.nn.relu,
    kernel_regularizer=tf.contrib.layers.l2_regularizer(0.001))
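The same parameter also exists on tf.layers.dense, so you can regularize the fully connected layers the same way. One thing to keep in mind: with tf.layers the penalties are only collected into the tf.GraphKeys.REGULARIZATION_LOSSES collection; they are not added to your loss automatically. A minimal sketch, assuming cross_entropy stands in for the data loss you already compute:

reg = tf.contrib.layers.l2_regularizer(0.001)  # 0.001 L2 scale from the paper
net = tf.layers.dense(inputs=net, units=5000, activation=tf.nn.relu,
                      kernel_regularizer=reg)
# Sum everything the layers put into REGULARIZATION_LOSSES
# and add it to the data loss before minimizing:
l2_loss = tf.losses.get_regularization_loss()
total_loss = cross_entropy + l2_loss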
Mohamed Elzarei