I am implementing an artificial neural network in C++ for my machine learning course. Here's my code for the "Neuron" struct:
#include <cmath>   // exp
#include <cstdlib> // rand

struct Neuron
{
    float *inputs;      // pointer to the inputs array
    int inputCount;     // number of inputs
    float *weights;
    float sum;          // weighted sum of the inputs
    float output = 0.0; // output of the activation function
    float bias = 1.0;
    float delta;        // error gradient

    Neuron() {}
    Neuron(int inputCount)
    {
        this->inputCount = inputCount;
        this->weights = new float[inputCount];
        InitializeWeights();
    }

    float ActivationFunction(float x)
    {
        return 1.0 / (1.0 + exp(-x)); // sigmoid
    }

    void Calculate(float *inputs)
    {
        this->inputs = inputs;
        float temp = 0.0;
        for (int i = 0; i < inputCount; i++)
        {
            temp += weights[i] * inputs[i];
        }
        temp += bias;
        sum = temp;
        output = ActivationFunction(sum);
    }

    void InitializeWeights()
    {
        for (int i = 0; i < inputCount; i++)
        {
            // note: rand() % 101 / 100 was integer division and almost
            // always gave 0, so divide by 100.0f to get a float in [0, 1]
            weights[i] = (rand() % 101) / 100.0f;
        }
    }

    ~Neuron()
    {
        delete[] weights;
    }
};
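From what I have read, this looks like a Rule of Three problem: I define a destructor but no copy constructor or copy assignment. Here is my guess at the missing copy operations, reduced to just the owning members (a sketch only; I am not sure this is the idiomatic fix):

```cpp
#include <algorithm> // std::copy
#include <utility>   // std::swap

struct Neuron
{
    float *weights = nullptr;
    int inputCount = 0;

    Neuron() {}
    Neuron(int inputCount)
        : weights(new float[inputCount]), inputCount(inputCount) {}

    // Rule of Three: deep-copy so each Neuron owns its own weights array
    Neuron(const Neuron &other)
        : weights(other.inputCount ? new float[other.inputCount] : nullptr),
          inputCount(other.inputCount)
    {
        std::copy(other.weights, other.weights + inputCount, weights);
    }

    // copy-and-swap: the parameter is a deep copy, swap our state into it
    // and let its destructor release our old array
    Neuron &operator=(Neuron other)
    {
        std::swap(weights, other.weights);
        std::swap(inputCount, other.inputCount);
        return *this;
    }

    ~Neuron()
    {
        delete[] weights;
    }
};
```

With these in place, neurons[i] = Neuron(inputCount); would copy the array instead of leaving neurons[i] pointing at memory that the temporary's destructor has already freed.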
I also have another struct called "Layer", which represents a layer of the network. The neurons are initialized there as:
for (int i = 0; i < neuronCount; i++)
{
    neurons[i] = Neuron(inputCount);
}
where "neuronCount" is the number of neurons in the layer. The problem is that the weights array inside each neuron is immediately deallocated: the temporary Neuron on the right-hand side is destroyed after the assignment, and its destructor deletes the very array that neurons[i] now points to. What should I do to prevent this? More importantly, is there a better way to design my program?
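One direction I have been considering for the redesign is to replace the raw pointers with std::vector, so that the compiler-generated copy, assignment, and destruction are all correct automatically and I need no manual delete[] at all. A minimal sketch of what I have in mind (not yet tested against my full program):

```cpp
#include <cmath>   // std::exp
#include <cstdlib> // rand
#include <vector>

struct Neuron
{
    std::vector<float> weights; // owns its storage; copies and frees itself
    float bias = 1.0f;
    float sum = 0.0f;
    float output = 0.0f;
    float delta = 0.0f; // error gradient

    Neuron() {}
    explicit Neuron(int inputCount) : weights(inputCount)
    {
        for (float &w : weights)
            w = (rand() % 101) / 100.0f; // random weight in [0, 1]
    }

    static float Sigmoid(float x)
    {
        return 1.0f / (1.0f + std::exp(-x));
    }

    float Calculate(const std::vector<float> &inputs)
    {
        sum = bias;
        for (std::size_t i = 0; i < weights.size(); i++)
            sum += weights[i] * inputs[i];
        output = Sigmoid(sum);
        return output;
    }
};
```

With this version, neurons[i] = Neuron(inputCount); simply copies (or moves) the vector, the destructor problem disappears, and the Layer could likewise hold a std::vector<Neuron>.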