7

I have a network consisting of multiple sub-networks (several convolutional nets followed by one fully connected + softmax layer). Each ConvNet is fed a specific region and size of the images. So, to feed my network, I wrote an image placeholder for every ConvNet input and one label placeholder for the labels of the whole batch (the input images across all ConvNet inputs share the same labels). Unfortunately, I have no idea how to write the feed_dict part. For example, this code trains only one ConvNet:

    images_r, labels_r = sess.run([images, labels])
    train_feed = {images_placeholder: images_r,
                  labels_placeholder: labels_r}
    _, loss_value = sess.run([train_op, loss_func], feed_dict=train_feed)

How can I extend the above code to feed all the conv nets?

Vijay Mariappan
hajbabaeim
  • I'm not sure I get your question right. You have multiple input points for the network? And if so, at each training step you need to provide an input for all such input points? – GPhilo Jul 04 '17 at 08:05

2 Answers

5

So, for each of the conv networks, if the input placeholders are conv_1_input, conv_2_input, ..., conv_N_input, then you pass them all in the feed_dict like this:

    train_feed = {conv_1_input: image_1, conv_2_input: image_2, ..., conv_N_input: image_N,
                  labels_placeholder: labels_r}
    _, loss_value = sess.run([train_op, loss_func], feed_dict=train_feed)
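As a minimal runnable sketch of this pattern (TF 1.x graph-mode API, accessed via `tf.compat.v1` so it also runs under TF 2; the placeholder names, patch shape, `N`, and batch size are all hypothetical), the feed_dict can be built by zipping the list of input placeholders with the list of per-branch batches:

```python
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

N = 3                             # number of conv sub-networks (assumed)
patch_shape = [None, 32, 32, 3]   # per-patch batch shape, hypothetical

# One image placeholder per conv net, plus one shared label placeholder.
conv_inputs = [tf.placeholder(tf.float32, patch_shape, name='conv_%d_input' % i)
               for i in range(1, N + 1)]
labels_placeholder = tf.placeholder(tf.int64, [None], name='labels')

# ... build each conv branch from conv_inputs[i], merge them,
# ... and define loss_func / train_op as in the question ...

# images_r would come from your input pipeline; dummy batches here.
batch_size = 8
images_r = [np.zeros([batch_size] + patch_shape[1:], np.float32)
            for _ in range(N)]
labels_r = np.zeros([batch_size], np.int64)

# Zip the placeholders with their batches, then add the shared labels.
train_feed = dict(zip(conv_inputs, images_r))
train_feed[labels_placeholder] = labels_r
# _, loss_value = sess.run([train_op, loss_func], feed_dict=train_feed)
```

Building the dict with `zip` keeps the code independent of `N`, so adding another branch only means appending one placeholder and one batch.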
Vijay Mariappan
-2

You should split/slice your images inside the computation graph and use a single input instead.
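A sketch of this single-input alternative (shapes and names hypothetical; `tf.compat.v1` used for the TF 1.x graph API): stack all patches along a new axis, feed one placeholder, and `tf.split` it into per-branch inputs inside the graph:

```python
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

N = 3  # number of conv branches (assumed)

# A single placeholder holding all N patches stacked along axis 1.
all_patches = tf.placeholder(tf.float32, [None, N, 32, 32, 3],
                             name='all_patches')

# Split into N tensors of shape [batch, 1, 32, 32, 3] and drop that axis,
# yielding one [batch, 32, 32, 3] input per conv branch.
branch_inputs = [tf.squeeze(t, axis=1)
                 for t in tf.split(all_patches, N, axis=1)]

with tf.Session() as sess:
    batch = np.zeros([8, N, 32, 32, 3], np.float32)
    outs = sess.run(branch_inputs, feed_dict={all_patches: batch})
```

Note that stacking into one tensor only works when every patch has the same spatial size, which is why the asker rejects this approach in the comment below.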

nttstar
  • Unfortunately, my patches (from the original images) do not all have identical features; I extract them from the original images based on landmarks (for example, for an image of a human face, my landmarks are 5 points: the centers of the eyes, the nose tip, and the corners of the lips). So I have to feed each extracted patch to its related placeholder. – hajbabaeim Jul 04 '17 at 07:35