
As part of a hyperparameter optimization routine, I use TensorFlow's high-level API tf.contrib.learn.DNNRegressor(). The problem is that each instance creates its own new model_dir, which I cannot delete while the program is running (even after the model has been overwritten and is no longer in RAM). This is a problem because it quickly consumes large amounts of disk storage. The question: how can I delete the model_dir?

Here is some pseudo example:

import shutil
import tensorflow as tf

X = ...  # some large input matrix
y = ...  # some large output vector

for train, valid in KFoldCV(X, y):
    # res: DataFrame of random hyperparameter draws
    for idx, row in res.iterrows():
        print(row)

        x_1 = tf.contrib.layers.real_valued_column("x_1")
        x_2 = tf.contrib.layers.real_valued_column("x_2")

        # DNN specification
        optimizer = tf.train.ProximalGradientDescentOptimizer(
            learning_rate=float(row['learning_rate']),
            l2_regularization_strength=float(
                row['l2_regularization_strength']))

        model = tf.contrib.learn.DNNRegressor(
            hidden_units=[int(row['hidden_units'])] * int(row['layers']),
            feature_columns=[x_1, x_2],
            optimizer=optimizer,
            dropout=float(row['dropout']))

        model.fit(input_fn=lambda: input_fn(train[0], train[1]),
                  steps=step)
        res.loc[idx, 'loss'] = model.evaluate(....)
        shutil.rmtree(model.model_dir)

The last statement raises the following error on the first call:

 OSError: [WinError 145] The directory is not empty:

And if called a second time:

PermissionError: [WinError 5] Access is denied:

EDIT: it appears that the TF event files are set to read-only during this process; some session in the background is keeping them locked. Once I terminate the program, TF deletes these files by itself.
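One direction that might address at least the read-only part is forcing the delete: a minimal sketch below, where `force_rmtree` is my own hypothetical helper (not a TensorFlow API) that clears the read-only attribute inside `shutil.rmtree`'s `onerror` hook and retries the failed operation.

```python
import os
import shutil
import stat


def force_rmtree(path):
    """Hypothetical helper: shutil.rmtree that clears the read-only
    attribute on files Windows refuses to delete, then retries."""
    def _on_error(func, p, exc_info):
        os.chmod(p, stat.S_IWRITE)  # make the entry writable again
        func(p)                     # retry the failed remove/rmdir
    shutil.rmtree(path, onerror=_on_error)
```

Note this cannot help with files a background session still holds open; the directory would only become removable once TensorFlow has released its handles (closing any cached summary writers first may be necessary).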
