
I'm planning to do real-time augmentation in Caffe, and these are the steps I have taken so far:

1. Replace the Data layer with a MemoryData layer in the network:

name: "test_network"
layer {
  name: "cifar"
  type: "MemoryData"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  memory_data_param {
   batch_size: 32
   channels: 3
   height: 32
   width: 32
  }

}
layer {
  name: "cifar"
  type: "MemoryData"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
   memory_data_param {
   batch_size: 32
   channels: 3
   height: 32
   width: 32
  }
} 

2. And this is the code for training:

import numpy as np
import caffe

# seq is an imgaug augmenter; data_train (NCHW), label_train, net and
# solver are set up earlier.
caffe.set_mode_gpu()
maxIter = 100
batch_size = 32
j = 0
for i in range(maxIter):
    # fetch images and augment (imgaug expects NHWC order)
    batch = seq.augment_images(np.transpose(data_train[j: j + batch_size], (0, 2, 3, 1)))
    print('batch-{0}-{1}'.format(j, j + batch_size))
    # back to NCHW (a plain reshape would scramble the pixels), set input and solve
    batch = np.ascontiguousarray(batch.transpose(0, 3, 1, 2), dtype=np.float32)
    net.set_input_arrays(batch, label_train[j: j + batch_size].astype(np.float32))
    j = j + batch_size  # the original "+ 1" here skipped one image per batch
    solver.step(1)

But when the code reaches net.set_input_arrays(), it crashes with this error:

W0405 20:53:19.679730  4640 memory_data_layer.cpp:90] MemoryData does not transform array data on Reset()
I0405 20:53:19.713727  4640 solver.cpp:337] Iteration 0, Testing net (#0)
I0405 20:53:19.719229  4640 net.cpp:685] Ignoring source layer accuracy_training
F0405 20:53:19.719229  4640 memory_data_layer.cpp:110] Check failed: data_ MemoryDataLayer needs to be initalized by calling Reset
*** Check failure stack trace: ***

I can't find the reset() method. What should I do?

Hossein
  • Spelling fixed a long time ago at https://github.com/BVLC/caffe/commit/09546dbe9130789f0571a76a36b0fc265cd81fe3 – Cœur Feb 03 '18 at 13:15

1 Answer


It seems MemoryDataLayer in Caffe is not meant to be used through the pycaffe interface.

Yes, using the MemoryDataLayer from Python is discouraged. It also transfers memory ownership from Python to C++ via the Boost bindings, which causes memory leaks: the memory is only released once the network object is destructed in Python, so if you train a network for a long time you will run out of memory. Use an InputLayer instead, where you can simply assign data from a NumPy array into the network's blobs.

Link
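As a minimal sketch of that InputLayer approach (the Input layer definition, the solver path, and the next_augmented_batch helper below are illustrative assumptions, not code from this question):

import numpy as np
import caffe

# Hypothetical replacement for both MemoryData layers in the prototxt --
# a deploy-style "Input" layer simply exposes blobs that Python can fill:
#
# layer {
#   name: "cifar"
#   type: "Input"
#   top: "data"
#   top: "label"
#   input_param {
#     shape { dim: 32 dim: 3 dim: 32 dim: 32 }  # data:  N x C x H x W
#     shape { dim: 32 }                          # label: N
#   }
# }

caffe.set_mode_gpu()
solver = caffe.SGDSolver('solver.prototxt')  # illustrative path

for i in range(100):
    batch, labels = next_augmented_batch()  # hypothetical helper, returns NCHW float32 arrays
    # Plain copies into the input blobs -- nothing is handed over to C++,
    # so no ownership transfer and no leak between iterations.
    solver.net.blobs['data'].data[...] = batch
    solver.net.blobs['label'].data[...] = labels
    solver.step(1)

Note that with this pattern the augmentation loop stays exactly as before; only the feeding mechanism changes from set_input_arrays to direct blob assignment.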
As for the solution, these answers offer good alternatives.
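For completeness, a note on the crash itself: the log shows the failure while the solver runs the TEST-phase net ("Testing net (#0)"), whose own MemoryData layer was never given data; net.set_input_arrays in the loop above only fills the TRAIN net. If you do stay with MemoryData despite the caveats, a sketch along these lines (test_data and test_labels are assumed to exist) avoids the check failure:

# Each net owned by the solver has its own MemoryData layer, and each
# must be fed before the solver touches it.  Arrays must be contiguous
# float32, and the sample count a multiple of the layer's batch_size.
solver.net.set_input_arrays(batch, labels)                    # TRAIN net
solver.test_nets[0].set_input_arrays(test_data, test_labels)  # TEST net
solver.step(1)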

Hossein