Linear regression in DL4J is just a neural network with a specific loss function and output layer type. This is how you would do linear regression in any other deep learning framework as well; the idea isn't specific to DL4J. It follows the conventions you'd find whenever the regression problem is expressed in a general-purpose framework.
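To make that concrete, here is a minimal sketch (the seed, learning rate, and variable names are just placeholders I picked): a network with no hidden layers, only an output layer with an identity activation and MSE loss, fits y = Wx + b by minimizing mean squared error, which is ordinary linear regression.

import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.nn.weights.WeightInit;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.learning.config.Sgd;
import org.nd4j.linalg.lossfunctions.LossFunctions;

//Plain linear regression: one output layer, identity activation, MSE loss
MultiLayerNetwork linearModel = new MultiLayerNetwork(new NeuralNetConfiguration.Builder()
        .seed(12345)                 //placeholder seed
        .weightInit(WeightInit.XAVIER)
        .updater(new Sgd(0.01))      //placeholder learning rate
        .list()
        .layer(0, new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                .activation(Activation.IDENTITY)
                .nIn(5).nOut(1)      //5 input features, 1 regression target
                .build())
        .build());
linearModel.init();

Adding hidden layers on top of that, as in the snippet further down, keeps the same loss and output layer but makes the regression nonlinear.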
I see you commented here as well:
DL4J linear regression
Replying to your "at least not helpful to me": do you mind clarifying what you take issue with in the question? That would help a bit.
To answer your question: you'd just declare a neural network with your 5 inputs and however many outputs you have. You're allowed to declare more than one output if that suits your objective; in your case it's 1 output.
An example of this would be:
//Imports needed for this snippet
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.nn.weights.WeightInit;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.learning.config.Nesterovs;
import org.nd4j.linalg.lossfunctions.LossFunctions;

//Create the network
int seed = 12345;            //example value, any fixed seed works
double learningRate = 0.01;  //example value, tune for your data
int numInput = 5;
int numOutputs = 1;
int nHidden = 10;
MultiLayerNetwork net = new MultiLayerNetwork(new NeuralNetConfiguration.Builder()
        .seed(seed)
        .weightInit(WeightInit.XAVIER)
        .updater(new Nesterovs(learningRate, 0.9))
        .list()
        .layer(0, new DenseLayer.Builder().nIn(numInput).nOut(nHidden)
                .activation(Activation.TANH) //Change this to RELU and you will see the net learns very well very quickly
                .build())
        .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                .activation(Activation.IDENTITY) //identity activation + MSE loss gives a regression output
                .nIn(nHidden).nOut(numOutputs).build())
        .build()
);
net.init();
The above snippet is adapted from the DL4J examples.
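If it helps, here is a rough sketch of how you might then fit that net and get predictions out of it. The random arrays are just stand-ins for your real 5-feature dataset (in practice you'd load it with something like a RecordReaderDataSetIterator), and the epoch count is arbitrary.

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.dataset.DataSet;
import org.nd4j.linalg.factory.Nd4j;

//Stand-in data: 100 examples, 5 features, 1 target each. Replace with your real data.
INDArray features = Nd4j.rand(100, 5);
INDArray labels = Nd4j.rand(100, 1);
DataSet trainingData = new DataSet(features, labels);

for (int epoch = 0; epoch < 200; epoch++) {
    net.fit(trainingData);   //one pass over the data per call
}

INDArray predictions = net.output(features);   //shape [100, 1], one regression value per row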
From there it's a matter of tuning the network so its output fits your data well. I would suggest doing some broader reading on the topic if you want to understand the relationship between neural nets and regression.