
I am trying to follow this example posted on GitHub.
I want to modify the example to use data I have downloaded (the Wisconsin breast cancer dataset). I have converted it all from CSV to an HDF5 file.
It is not clear to me how I am supposed to feed this data to the network.
The data consists of 700 rows and 11 columns, one of which is the 'label' column used for prediction.
To my understanding, each row should be input independently of the other rows for correct training?

Thanks in advance

Shai

1 Answer


Please see this answer on how to prepare HDF5 data for caffe's "HDF5Data" input layer.

Basically, you need to have two "datasets" inside the hdf5 file: one for the inputs and one for the label. Each dataset is a multi-dimensional array with the first dimension being the "batch" dimension. In your example, you have 700 examples of dimension 10 as input and 700 labels of dimension 1.
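A minimal sketch of that layout using h5py (the file and dataset names here are assumptions; Caffe's "HDF5Data" layer only requires that the dataset names in the file match the `top` names in your layer definition, and that the arrays are float32 with the batch axis first):

```python
import numpy as np
import h5py

# Stand-in for the table you loaded from the CSV: 700 rows,
# 10 feature columns plus 1 label column (random values here).
rng = np.random.default_rng(0)
table = rng.random((700, 11)).astype(np.float32)

X = table[:, :10]                # inputs: shape (700, 10)
y = table[:, 10].reshape(-1, 1)  # labels: shape (700, 1)

# Two datasets in one HDF5 file; the first dimension is the batch dimension.
with h5py.File('breast_cancer.h5', 'w') as f:
    f.create_dataset('data', data=X)
    f.create_dataset('label', data=y)

# The "HDF5Data" layer is pointed at a text file listing HDF5 file paths,
# not at the HDF5 file directly.
with open('train_h5.txt', 'w') as f:
    f.write('breast_cancer.h5\n')
```

The layer's `hdf5_data_param { source: "train_h5.txt" batch_size: ... }` then draws batches along that first axis, so each row is indeed presented to the network as an independent example.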

Shai