
I have a model built with tf.keras that has two inputs ("input_1" and "input_2"), which feed into different branches of the network; the network produces a single output. Since my dataset is large, I want to use tf.data to handle the input pipeline.

I tried the solution provided here: https://stackoverflow.com/a/52661189/5956578

However, when I run model.fit using that solution I get the error:

Invalid argument: You must feed a value for the placeholder tensor 'input_2' with dtype float and shape [20,375,1242,1]
[[{{node input_2}}]]

I realised this is because the output of the Dataset is a dict made up of "input_1" and "input_2", which model.fit is unable to feed properly to the respective inputs.

Any help to fix this, or an alternative solution to train a multiple-input tf.keras network would be appreciated.

EDIT: Here are the relevant parts of my code, following the solution in the link above:

...

input_1 = tf.keras.layers.Input(name='input_1', batch_size=batch_size, shape=(IMAGE_HEIGHT, IMAGE_WIDTH, 3))
input_2 = tf.keras.layers.Input(name='input_2', batch_size=batch_size, shape=(IMAGE_HEIGHT, IMAGE_WIDTH, 1))
output = KerasFunctionalAPINet(input_1, input_2)
model = tf.keras.models.Model(inputs=[input_1, input_2], outputs=output, name='Network')

...

def train_generator():
    for i in range(100):
        # Code to get images "source_1", "source_2" and labels "labels" from another python module
        yield {"input_1": source_1, "input_2": source_2}, labels
train_set = tf.data.Dataset.from_generator(
    train_generator,
    output_types=({"input_1": tf.float32, "input_2": tf.float32}, tf.float32),
    output_shapes=({"input_1": (IMAGE_HEIGHT, IMAGE_WIDTH, 3),
                    "input_2": (IMAGE_HEIGHT, IMAGE_WIDTH, 1)}, (6, 1)))
train_set = train_set.batch(batch_size*2, drop_remainder=True)

def test_generator():
    for i in range(100):
        # Code to get images "source_1", "source_2" and labels "labels" from another python module
        yield {"input_1": source_1, "input_2": source_2}, labels
test_set = tf.data.Dataset.from_generator(
    test_generator,
    output_types=({"input_1": tf.float32, "input_2": tf.float32}, tf.float32),
    output_shapes=({"input_1": (IMAGE_HEIGHT, IMAGE_WIDTH, 3),
                    "input_2": (IMAGE_HEIGHT, IMAGE_WIDTH, 1)}, (6, 1)))
test_set = test_set.batch(batch_size*2, drop_remainder=True)

...

model.fit(
    train_set,
    steps_per_epoch=no_of_samples // batch_size,
    epochs=epochs,
    validation_data=test_set,
    validation_steps=no_of_samples // batch_size,
    shuffle=True,
    verbose=1
)
Sanat

1 Answer

Yield a list (or tuple) of the inputs instead of a dict, so Keras matches them positionally to the model's inputs:

yield [source_1, source_2], labels
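With tuples, the output_types and output_shapes arguments mirror the same nested structure the generator yields. A minimal runnable sketch, where the sizes, sample counts, and zero-filled arrays are illustrative placeholders rather than the original data:

```python
import numpy as np
import tensorflow as tf

IMAGE_HEIGHT, IMAGE_WIDTH = 8, 10   # small placeholder dimensions
batch_size = 4

def train_generator():
    for i in range(16):
        # Placeholder arrays standing in for the real images and labels
        source_1 = np.zeros((IMAGE_HEIGHT, IMAGE_WIDTH, 3), dtype=np.float32)
        source_2 = np.zeros((IMAGE_HEIGHT, IMAGE_WIDTH, 1), dtype=np.float32)
        labels = np.zeros((6, 1), dtype=np.float32)
        # Yield a tuple of inputs instead of a dict
        yield (source_1, source_2), labels

# output_types/output_shapes are nested exactly like the generator's yield
train_set = tf.data.Dataset.from_generator(
    train_generator,
    output_types=((tf.float32, tf.float32), tf.float32),
    output_shapes=(((IMAGE_HEIGHT, IMAGE_WIDTH, 3),
                    (IMAGE_HEIGHT, IMAGE_WIDTH, 1)), (6, 1)))
train_set = train_set.batch(batch_size, drop_remainder=True)

inputs_batch, labels_batch = next(iter(train_set))
print(inputs_batch[0].shape)  # (4, 8, 10, 3)
print(inputs_batch[1].shape)  # (4, 8, 10, 1)
```

In TF 2.4 and later, the same nested structure can instead be described with a single `output_signature=` argument built from `tf.TensorSpec` objects, which supersedes the `output_types`/`output_shapes` pair.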
Daniel Möller
  • Thanks. I thought of this, but I'm having trouble defining the output_types and output_shapes for the dataset. Could you please help with that? – Sanat Dec 10 '19 at 04:41