
I want to add regularization to my TensorFlow neural network.

I have tried the first solution (Lukasz's answer) from:

How to add regularizations in TensorFlow?

But then the interpreter yelled at me:

module 'tensorflow' has no attribute 'get_collection'?

How do I make this work in my module? Is there another way to add regularization?

This is my relevant part of code:

import tensorflow as tf
from tensorflow import keras

def get_trained_model(X, y, hidden_size_list, steps, lambdaa=0):
    model = keras.models.Sequential()
    model.add(keras.layers.Flatten(input_shape=(X.shape[1],)))
    for hs in hidden_size_list:
        model.add(keras.layers.Dense(hs, activation='relu'))
    model.add(keras.layers.Dense(2))

    my_normal_loss = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    # The TF1-style collection lookup below is what raises the error:
    reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
    reg_constant = lambdaa  # Choose an appropriate one.
    loss = my_normal_loss + reg_constant * sum(reg_losses)

    optim = keras.optimizers.Adam(learning_rate=0.001)
    metrics = ["accuracy"]
    model.compile(loss=loss, optimizer=optim, metrics=metrics)
    batch_size = X.shape[0]  # full-batch training
    model.fit(X, y, batch_size=batch_size, epochs=steps, shuffle=True, verbose=1)

    return model

This is the error:

AttributeError                            Traceback (most recent call last)
Input In [2], in <cell line: 1>()
----> 1 nuearal_network_1(train_data, test_data, [20,20,20,20,20], 0)

File ~\machine_learning\kaggle_competitions\spaceship-titanic\final_code\submissions_creation.py:125, in nuearal_network_1(train_data, test_data, hidden_size_list, lambdaa, save_name, submission_ex)
    122 save_name += "/lambdaa_" + str(lambdaa)
    124 steps = 3000
--> 125 model = get_trained_model(X,y,hidden_size_list, steps, lambdaa)
    126 predictions = get_prediction(x_test, model)
    128 data = pd.read_csv(submission_ex)

File ~\machine_learning\kaggle_competitions\spaceship-titanic\final_code\neural_network.py:15, in get_trained_model(X, y, hidden_size_list, steps, lambdaa)
     12 model.add(keras.layers.Dense(2))
     14 my_normal_loss = keras.losses.SparseCategoricalCrossentropy(from_logits = True)
---> 15 reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
     16 reg_constant = lambdaa  # Choose an appropriate one.
     17 loss = my_normal_loss + reg_constant * sum(reg_losses)

AttributeError: module 'tensorflow' has no attribute 'get_collection'

1 Answer

from tensorflow.keras import layers
from tensorflow.keras import regularizers

layer = layers.Dense(
    units=64,
    kernel_regularizer=regularizers.L1L2(l1=1e-5, l2=1e-4),  # penalty on the weights
    bias_regularizer=regularizers.L2(1e-4),                  # penalty on the biases
    activity_regularizer=regularizers.L2(1e-5)               # penalty on the layer's output
)

https://keras.io/api/layers/regularizers/

You can refer to this link for adding regularization to your layers. Keras has built-in regularizers you can use directly.
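
For the asker's setup specifically, here is a minimal sketch (my adaptation, not code from the linked page) that attaches an L2 penalty to each hidden layer, with lambdaa playing the role of the old reg_constant. Keras folds the penalties into the compiled loss on its own, so tf.get_collection is no longer needed:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import regularizers

def get_trained_model(X, y, hidden_size_list, steps, lambdaa=0):
    model = keras.models.Sequential()
    model.add(keras.layers.Flatten(input_shape=(X.shape[1],)))
    for hs in hidden_size_list:
        # Each kernel contributes lambdaa * sum(w**2) to the training loss,
        # replacing the old reg_constant * sum(reg_losses) term.
        model.add(keras.layers.Dense(
            hs, activation='relu',
            kernel_regularizer=regularizers.L2(lambdaa)))
    model.add(keras.layers.Dense(2))

    # Plain cross-entropy; the regularization terms are appended by Keras.
    loss = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    model.compile(loss=loss,
                  optimizer=keras.optimizers.Adam(learning_rate=0.001),
                  metrics=["accuracy"])
    model.fit(X, y, batch_size=X.shape[0], epochs=steps,
              shuffle=True, verbose=1)
    return model

With kernel_regularizer=regularizers.L2(lambdaa), the total objective is the same my_normal_loss + reg_constant * sum(reg_losses) formula from the question, just assembled by Keras instead of by hand.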

  • From what I knew, the regularization parameter is one parameter for the whole neural network. Can you explain to me mathematically what a regularization parameter per layer means? I mean, in this case, what does the cost function look like? – lior1zh2000 Jun 10 '22 at 07:43
  • @lior1zh2000 the regularization function can be used per layer as well as for the entire model at once. If it is applied per layer, then for every layer it will compute the absolute value (L1 regularization) or squared value (L2) of the weights and add that to the loss function, which forces the model to shrink the weights of that particular layer if they are large. I hope this answers you. If not, I'll try to share an example of how I recently used it. – Ninad Kulkarni Jun 10 '22 at 13:56
  • I would be very thankful if you shared your example. I understand the math concept of regularization per layer now, thank you for that explanation. But there is one code issue I don't understand here. At the end I will need to compile my model with ```model.compile(loss = ...)```. Which loss do I compile with if my layers already have regularizers? The normal keras.losses.SparseCategoricalCrossentropy(from_logits = True)? Can you please share your example of a regularized NN, it would help me a lot. – lior1zh2000 Jun 10 '22 at 14:08
  • ```model.compile(loss='sparse_categorical_crossentropy', optimizer="adam", metrics=['accuracy'])``` You can add sparse categorical cross-entropy as the loss component. Also, for the layer losses you can use ```print(tf.math.reduce_sum(layer.losses))``` to check each layer's loss (see the sketch after this thread). – Ninad Kulkarni Jun 13 '22 at 05:53
  • Is your question answered or are you still facing the issue? – Ninad Kulkarni Jun 16 '22 at 11:10
  • No. In my code I am using a lambdaa regularization parameter: ```reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES); reg_constant = lambdaa  # Choose an appropriate one.; loss = my_normal_loss + reg_constant * sum(reg_losses)``` But in the code you just wrote (in the last comment) there is no place for a regularization parameter. – lior1zh2000 Jun 19 '22 at 16:40
  • Can you share your code example? – lior1zh2000 Jun 19 '22 at 16:55
  • As per my knowledge, you don't need to add the individual losses at compilation. Once you add the loss parameter in ```model.compile()``` as mentioned in my previous comment, Keras automatically adds the layer losses to it, and you don't need to add them separately. For reference you can still check the Keras website (linked in the main answer), or this may also be useful: https://machinelearningmastery.com/how-to-reduce-overfitting-in-deep-learning-with-weight-regularization/ – Ninad Kulkarni Jun 20 '22 at 04:58
  • So the loss I need is 'sparse_categorical_crossentropy' OR tf.keras.losses.SparseCategoricalCrossentropy(from_logits = True), and what is the difference? – lior1zh2000 Jun 20 '22 at 05:08
  • ```from_logits = True``` is generally used when you don't use the softmax function in the output layer. It states that the values coming out of the model are not normalized. By default, the losses use ```from_logits=False```, so when you don't pass this parameter it is false. – Ninad Kulkarni Jun 20 '22 at 13:01
  • I am sharing one practical example I previously read while understanding this concept. Maybe that will help you more: https://stackoverflow.com/questions/57253841/from-logits-true-and-from-logits-false-get-different-training-result-for-tf-loss#:~:text=from_logits%20%3D%20True%20signifies%20the%20values,softmax%20function%20in%20our%20model. – Ninad Kulkarni Jun 20 '22 at 13:04
  • Thanks for your help. I now understand how to do regularization in a TF NN. – lior1zh2000 Jun 21 '22 at 20:11
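
To make the thread above concrete, here is a small sketch (my illustration, assuming a toy model and random input) showing that Keras collects the per-layer penalties automatically: each regularized layer exposes its penalty through layer.losses, model.losses gathers all of them, and their sum is exactly what model.fit adds to the compiled loss.

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import regularizers

model = keras.models.Sequential([
    keras.layers.Dense(20, activation='relu', input_shape=(10,),
                       kernel_regularizer=regularizers.L2(0.01)),
    keras.layers.Dense(2),
])

_ = model(np.random.rand(5, 10).astype('float32'))  # one forward pass

# One tensor per regularized layer; their sum is added to the training loss.
print(model.losses)
print(tf.math.reduce_sum(model.layers[0].losses))  # 0.01 * sum(w**2)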