
I have a factorMatrix of type INDArray which is xIn

It contains 5 columns [A, B, C, D, E]

and another INDArray with 1 column [y] which is yIn

For those who are familiar with MATLAB: there I could use

modelTrain = fitlm([XInSample yInSample] , 'linear') and then 

retPredictionRegress = predict(modelTrain , XInSample);

I'm having trouble setting up DL4J with xIn and yIn, even though I found something that claims to be helpful, but at least it wasn't to me.

Could anyone please put me on the right track?

Anna Klein

1 Answer


Linear regression in dl4j is just a neural network with a specific loss function and output layer type. This follows how you would do linear regression in any other deep learning framework. This concept/idea isn't really specific to dl4j itself. It's following the conventions you'd find in any instantiation of the regression problem with a more general-purpose framework.

I see you commented here as well: DL4J linear regression

Replying to your "at least not helpful to me": do you mind clarifying what exactly you had issues with? That would help a bit.

Answering your question: you'd declare a neural network with your 5 inputs, however many inputs your data has. You're allowed to declare more than 1 output if you find the objective to be suitable; in your case it's 1 output.

An example of this would be:

    //Create the network
    long seed = 12345;           // any fixed seed, for reproducibility
    double learningRate = 0.01;  // tune as needed
    int numInput = 5;
    int numOutputs = 1;
    int nHidden = 10;
    MultiLayerNetwork net = new MultiLayerNetwork(new NeuralNetConfiguration.Builder()
            .seed(seed)
            .weightInit(WeightInit.XAVIER)
            .updater(new Nesterovs(learningRate, 0.9))
            .list()
            .layer(0, new DenseLayer.Builder().nIn(numInput).nOut(nHidden)
                    .activation(Activation.TANH) //Change this to RELU and you will see the net learns very well very quickly
                    .build())
            .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                    .activation(Activation.IDENTITY)
                    .nIn(nHidden).nOut(numOutputs).build())
            .build()
    );

The above snippet comes from the dl4j examples.

In this case, you would be working on tuning the neural network with the appropriate output. I would suggest doing some broader reading on the topic if you want to understand the relationship between neural nets and regression.
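To mirror the MATLAB `fitlm`/`predict` calls, training and prediction could then look roughly like the sketch below. This is an assumption about how you'd wire it up, not code from the question: it presumes the `net` built in the snippet above and that `xIn` (n×5) and `yIn` (n×1) are your already-populated INDArrays; the epoch count is arbitrary.

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.dataset.DataSet;

// Sketch only: `net` is the MultiLayerNetwork from the snippet above,
// xIn and yIn are the INDArrays described in the question.
net.init();
DataSet trainData = new DataSet(xIn, yIn);   // features + targets in one object

for (int epoch = 0; epoch < 200; epoch++) {  // epoch count chosen arbitrarily
    net.fit(trainData);                      // one pass over the training data
}

// Rough analogue of MATLAB's predict(modelTrain, XInSample):
INDArray predictions = net.output(xIn);
```

Unlike `fitlm`, which solves the least-squares problem in closed form, this trains iteratively, so results depend on the learning rate and number of epochs.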

Adam Gibson
  • Sure, and ty for your explanation. The example is confusing because when I check the input and test data I see no labels. As far as I understood ML, there should be a proper label which tells the ML algorithm how to classify a result. In my case each row of xIn maps to its corresponding yIn => (which is either positive or negative), so it is not clear to me how ML can map when there is no label to map to? I think it is the same thing the other inquirer asked about. – Anna Klein Jul 17 '20 at 07:38
  • Hi, you would map your output targets as the outputs for the model to learn. The "label" is just the real-numbered output. Of note here: you should make sure to normalize both your inputs and your outputs. In dl4j itself, you would then want to make sure to rescale the output to the "real" numbers after you are done training. – Adam Gibson Jul 17 '20 at 11:26
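The normalize-then-rescale step described in the comment above could be sketched with ND4J's `NormalizerStandardize` (one possible choice; `NormalizerMinMaxScaler` follows the same pattern). Again assuming `xIn`, `yIn`, and `net` from earlier:

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.dataset.DataSet;
import org.nd4j.linalg.dataset.api.preprocessor.NormalizerStandardize;

// Sketch: normalize features AND labels before training, then rescale
// predictions back to real units afterwards.
DataSet trainData = new DataSet(xIn, yIn);

NormalizerStandardize normalizer = new NormalizerStandardize();
normalizer.fitLabel(true);        // also collect statistics for the labels
normalizer.fit(trainData);        // compute mean/std from the training data
normalizer.transform(trainData);  // normalize the DataSet in place

// ... train the network on trainData as shown earlier ...

// Predictions come out in normalized units; revert them to the y scale:
INDArray normalizedPred = net.output(trainData.getFeatures());
normalizer.revertLabels(normalizedPred);
```

`fitLabel(true)` is what makes the normalizer track the label statistics too, which is what allows `revertLabels` to undo the scaling on the predictions.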