
I am trying to feed input (IP) and ideal (ID) data to an Encog neural network (the BasicNetwork class). All the tutorials show the input format (MLData) like this:

IP11,IP12,IP13        ID11,ID12
IP21,IP22,IP23        ID21,ID22
some more values...
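
For reference, in code this conventional one-row-per-example layout corresponds to something like the following (a rough sketch; the numbers are just placeholders):

import org.encog.ml.data.MLDataPair;
import org.encog.ml.data.MLDataSet;
import org.encog.ml.data.basic.BasicMLDataSet;

public class TutorialLayout {
    public static void main(String[] args) {
        // One input row paired with one ideal row, as in the tutorials.
        double[][] input = { { 0.1, 0.2, 0.3 },   // IP11, IP12, IP13
                             { 0.4, 0.5, 0.6 } }; // IP21, IP22, IP23
        double[][] ideal = { { 0.0, 1.0 },        // ID11, ID12
                             { 1.0, 0.0 } };      // ID21, ID22
        MLDataSet trainingSet = new BasicMLDataSet(input, ideal);

        // Each pair holds one input vector and its corresponding ideal vector.
        for (MLDataPair pair : trainingSet) {
            System.out.println(pair.getInput() + " -> " + pair.getIdeal());
        }
    }
}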

But I want to feed the data like this:

IP11,IP12,IP13
IP21,IP22,IP23        ID11,ID12

IP11,IP12,IP13
IP21,IP22,IP23        ID21,ID22

Basically, I intend to associate a matrix of input values with an array of ideal values. Is there a way to do that using the Encog framework?

Eagerly awaiting reply.

NRPanda

1 Answer


Nearly all machine learning models, neural networks included, accept a one-dimensional vector as input. The only way to represent 2D (or higher-dimensional) data to the BasicNetwork in Encog is to flatten the matrix into a vector; an 8x8 matrix becomes a 64-element vector. For a traditional feedforward neural network (BasicNetwork), it does not matter which part of the matrix maps to which element of the input vector. The fact that input #3 and input #4 are next to each other does not matter; they are all treated as separate inputs.
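
A minimal sketch of that flattening approach, assuming for illustration that each training example is a 2x3 input matrix paired with a 2-element ideal vector (the class name, layer sizes, and values are placeholders):

import org.encog.engine.network.activation.ActivationSigmoid;
import org.encog.ml.data.MLDataSet;
import org.encog.ml.data.basic.BasicMLDataSet;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.layers.BasicLayer;
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

public class FlattenExample {

    // Flatten a 2D matrix into a single row vector, row by row.
    static double[] flatten(double[][] matrix) {
        double[] flat = new double[matrix.length * matrix[0].length];
        int k = 0;
        for (double[] row : matrix) {
            for (double v : row) {
                flat[k++] = v;
            }
        }
        return flat;
    }

    public static void main(String[] args) {
        // One training example: a 2x3 input matrix paired with a 2-element ideal vector.
        double[][] matrix1 = { { 0.1, 0.2, 0.3 },
                               { 0.4, 0.5, 0.6 } };
        double[]   ideal1  = { 0.0, 1.0 };

        // Each flattened matrix becomes one 6-element input row in the data set.
        double[][] input = { flatten(matrix1) };
        double[][] ideal = { ideal1 };
        MLDataSet trainingSet = new BasicMLDataSet(input, ideal);

        // The input layer is sized to the flattened length (2 * 3 = 6).
        BasicNetwork network = new BasicNetwork();
        network.addLayer(new BasicLayer(null, true, 6));
        network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 10));
        network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 2));
        network.getStructure().finalizeStructure();
        network.reset();

        // Train as usual; the network only ever sees flat vectors.
        ResilientPropagation train = new ResilientPropagation(network, trainingSet);
        train.iteration();
        train.finishTraining();
    }
}

The flattening order (row-major here) is arbitrary for a BasicNetwork, as noted above, as long as it is applied consistently to every example.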

JeffHeaton
  • Is this a limitation of the current library? It sounds like a reasonable requirement. Not being able to accept higher-dimensional input (and therefore output) may pose a serious limitation, and flattening the data doesn't feel like the right way around it. – liang Jun 19 '15 at 17:35
  • It's a limitation of neural networks and most machine learning algorithms; they typically receive an input vector. – JeffHeaton Jun 19 '15 at 17:35
  • What about methods that learn from images? Images are inherently 2D. How would you input an image then? – liang Jun 19 '15 at 17:38
  • Lots of info here: http://stackoverflow.com/questions/2084694/how-to-input-the-image-to-the-neural-network – JeffHeaton Jun 19 '15 at 17:39
  • Thanks for the great link. I still think this is quite a limitation, but if it applies to machine learning algorithms in general, there's not much to be done about it. However, in Encog specifically, the TemporalPoint class accepts multi-dimensional data for a data point. This gave me the wrong impression that multi-dimensional temporal data is supported by Encog. Why does the TemporalPoint class support multiple dimensions in Encog? – liang Jun 19 '15 at 17:55
  • It is a time series encoder that I provided. There are many, many ways to encode data for machine learning; I do not provide all of them. – JeffHeaton Jun 19 '15 at 17:57
  • I guess you're right. Hopefully in a future version there could be a one-dimensional temporal point class for algorithms that support 1D only, and a multi-dimensional temporal point for the rest, so it's more explicit. Just an idea. Thanks for the answer, by the way. – liang Jun 19 '15 at 18:09