I wanted to implement an RBFN and found this code on StackOverflow itself. While I understand some of it, I do not understand what gamma and kwargs are, or what the entire call function does. Can someone please explain it to me?

from keras.layers import Layer
from keras import backend as K

class RBFLayer(Layer):
    def __init__(self, units, gamma, **kwargs):
        super(RBFLayer, self).__init__(**kwargs)
        self.units = units
        self.gamma = K.cast_to_floatx(gamma)

    def build(self, input_shape):
        self.mu = self.add_weight(name='mu',
                                  shape=(int(input_shape[1]), self.units),
                                  initializer='uniform',
                                  trainable=True)
        super(RBFLayer, self).build(input_shape)

    def call(self, inputs):
        diff = K.expand_dims(inputs) - self.mu
        l2 = K.sum(K.pow(diff, 2), axis=1)
        res = K.exp(-1 * self.gamma * l2)
        return res

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.units)

1 Answer


Gamma: According to the docs, the gamma parameter defines how far the influence of a single training example reaches, with low values meaning 'far' and high values meaning 'close'. The behavior of the model is very sensitive to gamma: when gamma is very small, the model is too constrained and cannot capture the complexity or "shape" of the data. It is a hyperparameter.
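To make this concrete, here is a small NumPy sketch (not from the original answer) of how gamma controls the reach of a single point's influence:

```python
import numpy as np

# RBF similarity between a point x and a center c.
def rbf(x, c, gamma):
    return np.exp(-gamma * np.sum((x - c) ** 2))

x = np.array([0.0, 0.0])
c = np.array([2.0, 0.0])  # squared distance = 4

# Low gamma: influence reaches far, similarity stays near 1.
far_reaching = rbf(x, c, gamma=0.01)  # exp(-0.04), roughly 0.96
# High gamma: influence is local, similarity collapses toward 0.
local = rbf(x, c, gamma=10.0)         # exp(-40), essentially 0
```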

kwargs: **kwargs lets a function accept an arbitrary number of keyword arguments. Here it forwards any extra arguments (such as name) to the base Layer constructor.
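A minimal illustration of the mechanism (the function and names below are made up for this example): extra keyword arguments are collected into a dict, exactly as the layer's __init__ forwards them to super().__init__:

```python
# **kwargs gathers any keyword arguments not listed explicitly.
def make_layer(units, **kwargs):
    # kwargs is a plain dict, e.g. {"name": "rbf", "trainable": True}
    return {"units": units, **kwargs}

layer = make_layer(10, name="rbf", trainable=True)
# layer == {"units": 10, "name": "rbf", "trainable": True}
```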

Call: In the call function, you're calculating the radial basis function kernel, i.e. the RBF kernel, defined as follows:

K(x, x') = exp(−‖x − x'‖² / (2σ²))

The calculation of the numerator, the squared distance ‖x − x'‖²:

diff = K.expand_dims(inputs) - self.mu
l2 = K.sum(K.pow(diff,2), axis=1)

The exponential, with the denominator 2σ² absorbed into gamma:

res = K.exp(-1 * self.gamma * l2)
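The broadcasting in those three lines is easiest to see with a NumPy sketch (assuming a batch of 2 samples, 3 features, and units=4 centers; these numbers are chosen only for illustration):

```python
import numpy as np

inputs = np.random.rand(2, 3)        # (batch, features)
mu = np.random.rand(3, 4)            # (features, units): one center per output unit

# expand_dims appends an axis: (2, 3) -> (2, 3, 1),
# which broadcasts against (3, 4) to give (2, 3, 4).
diff = inputs[..., np.newaxis] - mu
# Sum of squares over the feature axis -> (batch, units).
l2 = np.sum(diff ** 2, axis=1)
gamma = 0.5
res = np.exp(-gamma * l2)            # (2, 4): one RBF activation per center
```

So each output unit measures how close the input is to that unit's learned center mu.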

The self.gamma corresponds to γ = 1 / (2σ²), so exp(-gamma * l2) computes the full kernel without a separate division.
