A very simple approach is to build the loop inside your own model:
inputs = Input(...)
#part 1 layers - instantiated once, so both passes share the same weights:
layer1 = SomeLayer(...)
layer2 = SomeLayer(...)
layer3 = SomeLayer(...)
intermediateLayer = IntermediateLayer(...)
#first pass:
out = layer1(inputs)
out = layer2(out)
out = layer3(out)
intermediate_out = intermediateLayer(out)
#second pass:
out = layer1(intermediate_out)
out = layer2(out)
out = layer3(out)
second_pass_out = intermediateLayer(out)
#rest of the model - you decide whether you need the first pass output or only the second
out = SomeLayer(...)(second_pass_out)
out = SomeLayer(...)(out)
...
final_out = FinalLayer(...)(out)
The model then goes:
model = Model(inputs, final_out)
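To make the pattern concrete, here is a minimal runnable sketch, assuming plain Dense layers in place of the placeholders (the shapes and sizes below are illustrative, not part of the original):

from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape=(10,))

#part 1 layers - instantiated once so both passes share weights:
layer1 = Dense(32, activation='relu')
layer2 = Dense(32, activation='relu')
layer3 = Dense(32, activation='relu')
intermediateLayer = Dense(10)  #output matches the input shape so it can be fed back in

#first pass:
out = layer1(inputs)
out = layer2(out)
out = layer3(out)
intermediate_out = intermediateLayer(out)

#second pass through the same layer instances:
out = layer1(intermediate_out)
out = layer2(out)
out = layer3(out)
second_pass_out = intermediateLayer(out)

#rest of the model - these layers are created inline, so they are not reused:
out = Dense(16, activation='relu')(second_pass_out)
final_out = Dense(1)(out)

model = Model(inputs, final_out)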
Depending on your purposes, you can make only the second pass participate in training by blocking the gradients coming from the first pass:
#right after creating intermediate_out, before using it
from keras import backend as K
intermediate_out = Lambda(lambda x: K.stop_gradient(x))(intermediate_out)
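As a minimal self-contained sketch of this idea (a single shared Dense layer with illustrative sizes, not from the original): the shared layer still trains, but only through its second call, because the gradient path through the first call is cut.

from keras.layers import Input, Dense, Lambda
from keras.models import Model
from keras import backend as K

inputs = Input(shape=(10,))
shared = Dense(10, activation='relu')  #hypothetical shared layer

#first pass, then cut the gradient path back through it:
intermediate_out = shared(inputs)
intermediate_out = Lambda(lambda x: K.stop_gradient(x))(intermediate_out)

#second pass through the same layer - `shared` still trains,
#but only through this call:
second_pass_out = shared(intermediate_out)

model = Model(inputs, second_pass_out)
model.compile(optimizer='adam', loss='mse')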
You can also create more models that share these layers and use each model for a different purpose; since they use the same layer instances, their weights will always stay in sync.
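For instance, a second model exposing the intermediate output can be trained or evaluated separately while staying in sync with the full model (again a minimal sketch with illustrative Dense layers):

from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape=(10,))
layer1 = Dense(32, activation='relu')
intermediateLayer = Dense(10)

intermediate_out = intermediateLayer(layer1(inputs))
final_out = Dense(1)(intermediate_out)

#two models built over the same layer instances:
full_model = Model(inputs, final_out)
intermediate_model = Model(inputs, intermediate_out)

#training full_model updates layer1 and intermediateLayer;
#intermediate_model immediately sees the updated weights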
Notice that in "part 1" the layers get "reused": each is instantiated once and called twice, so both passes go through the same weights. In "rest of the model" the layers are not reused, since each SomeLayer(...) call there creates a fresh instance. If for some reason you need to reuse layers in that second part too, do it the same way it was done for "part 1", as sketched below.