
I am using OpenNN to develop a neural network for a regression task, and I am having trouble with the layers of my network. I cloned the master branch of OpenNN a couple of weeks ago. Every time I try to add a layer, my program crashes without an error message. Since I am implementing a network for a regression problem, I took a look at the yacht_hydrodynamics_design example from OpenNN, but after copying that code into mine, I ran into this problem. So far I have tried adding a scaling layer and an unscaling layer, but neither works. This is my code so far:

bool NNetwork::preparationForTraining(const string& filedata) {
int inputLayerSize = 5;
int outputLayerSize = 1;
int hiddenLayerSize = round(sqrt((inputLayerSize * inputLayerSize) + (outputLayerSize * outputLayerSize)));
int layers = 3;
try {
    Tensor<Index, 1> neural_network_architecture(layers);
    neural_network_architecture.setValues({inputLayerSize, hiddenLayerSize, outputLayerSize});
    neuralnetwork = NeuralNetwork(NeuralNetwork::Approximation, neural_network_architecture);     
}
catch(...) {
    cerr << "Failed to initialize Neural Network" << endl;
    return false;
}


try {
    dataset = DataSet(filedata, ';', true);
}
catch(...) {
    cerr << "Can not read Feature File" << endl;
    return false;
}
if (dataset.get_input_variables_number() != inputLayerSize) {
    cerr << "Wrong size of input layer" << endl;
    return false;
}

if (dataset.get_target_variables_number() != outputLayerSize) {
    cerr << "Wrong size of output layer" << endl;
    return false;
}

//prepare Dataset
//get the information of the variables, such as names and statistical descriptives
Tensor<string, 1> inputs_names = dataset.get_input_variables_names();
Tensor<string, 1> targets_names = dataset.get_target_variables_names();

//instances are divided into training, selection and testing subsets
dataset.split_samples_random();

//get the input variables number and target variables number
Index input_variables_number = dataset.get_input_variables_number();
Index target_variables_number = dataset.get_target_variables_number();

//scale the data set with the minimum-maximum scaling method
Tensor<string, 1> scaling_inputs_methods(input_variables_number);
scaling_inputs_methods.setConstant("MinimumMaximum");
Tensor<Descriptives, 1> inputs_descriptives = dataset.scale_input_variables(scaling_inputs_methods);

Tensor<string, 1> scaling_target_methods(target_variables_number);
scaling_target_methods.setConstant("MinimumMaximum");
Tensor<Descriptives, 1> targets_descriptives = dataset.scale_target_variables(scaling_target_methods);

//prepare Neural Network

//introduce information in the layers for a more precise calibration
neuralnetwork.set_inputs_names(inputs_names);
neuralnetwork.set_outputs_names(targets_names);
cout << "inputs names: " << inputs_names << endl;
cout << "targets names: " << targets_names << endl;

//add scaling layer to neural network
ScalingLayer* scaling_layer_pointer = neuralnetwork.get_scaling_layer_pointer(); //Program crashes here
scaling_layer_pointer->set_scaling_methods(ScalingLayer::MinimumMaximum);
scaling_layer_pointer->set_descriptives(inputs_descriptives);

//add the unscaling layer to neural network
UnscalingLayer* unscaling_layer_pointer = neuralnetwork.get_unscaling_layer_pointer();
unscaling_layer_pointer->set_unscaling_methods(UnscalingLayer::MinimumMaximum);
unscaling_layer_pointer->set_descriptives(targets_descriptives);

return true;
}

As you can see, I have a class named NNetwork which is constructed like this (header-file):

using namespace OpenNN;
using namespace Eigen;
namespace covid {
class NNetwork {
public:
    explicit NNetwork();
    ~NNetwork() = default;
    bool preparationForTraining(const string& filedata);
    bool training();
    bool testing();
    bool predict(const string &filedata, std::vector<double> &prediction);
    bool loadNN();
private:
    OpenNN::NeuralNetwork neuralnetwork;
    OpenNN::DataSet dataset;
};
}

When I delete the last six lines of preparationForTraining, the program continues until the next crash, which happens in the function training, called right after preparationForTraining:

bool NNetwork::training() {
    //set the training strategy, which is composed by Loss index and Optimization algorithm
    // Training strategy object
    TrainingStrategy training_strategy(&neuralnetwork, &dataset); //Program crashes here next
    training_strategy.set_loss_method(TrainingStrategy::NORMALIZED_SQUARED_ERROR);
    training_strategy.set_optimization_method(TrainingStrategy::ADAPTIVE_MOMENT_ESTIMATION);

    // optimization
    AdaptiveMomentEstimation* adam = training_strategy.get_adaptive_moment_estimation_pointer();
    adam->set_loss_goal(1.0e-3);
    adam->set_maximum_epochs_number(10000);
    adam->set_display_period(1000);

    try {
        // start the training process
        const OptimizationAlgorithm::Results optimization_algorithm_results = training_strategy.perform_training();
        optimization_algorithm_results.save("E:/vitalib/vitalib/optimization_algorithm_results.dat");
    }
    catch(...) {
        return false;
    }
    return true;
}

I have the feeling that I am missing something, maybe a critical line of code or something similar. It would be nice if anyone with OpenNN experience could help me.

Update: I moved the whole body of preparationForTraining into main, and now the program does not crash. But this is not what I am looking for; I would rather keep it in a function.

  • Can you put a check in your code for if 'scaling_layer_pointer' is null ? I think you are dereferencing a Null pointer. – user3389943 Jun 03 '21 at 14:37
  • @user3389943 I just checked, and it looks like the program crashes when I initialize scaling_layer_pointer, so I cannot check whether it is a null pointer. – ano Jun 03 '21 at 15:13
  • @user3389943 My previous comment was wrong; scaling_layer_pointer is indeed a null pointer. – ano Jun 03 '21 at 17:17

0 Answers