I have been reading a TensorFlow implementation of style transfer. Specifically, it defines the loss that is then optimized. One of the loss functions reads:

```python
def sum_style_losses(sess, net, style_imgs):
    total_style_loss = 0.
    weights = args.style_imgs_weights
    for img, img_weight in zip(style_imgs, weights):
        # load the current style image into the network's input variable
        sess.run(net['input'].assign(img))
        style_loss = 0.
        for layer, weight in zip(args.style_layers, args.style_layer_weights):
            a = sess.run(net[layer])     # evaluated now, yielding a numpy array
            x = net[layer]               # left as a symbolic tensor
            a = tf.convert_to_tensor(a)  # wrap the array as a constant tensor
            style_loss += style_layer_loss(a, x) * weight
        style_loss /= float(len(args.style_layers))
        total_style_loss += (style_loss * img_weight)
    return total_style_loss
```
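`style_layer_loss` is not shown in the snippet; presumably it compares Gram matrices of the two feature maps in the style of Gatys et al. A minimal sketch, assuming that interpretation (not the repo's actual code):

```python
import tensorflow as tf

def gram_matrix(features):
    # features: [1, height, width, channels] activations of one layer
    _, h, w, c = features.get_shape().as_list()
    f = tf.reshape(features, (h * w, c))       # flatten the spatial dimensions
    return tf.matmul(f, f, transpose_a=True)   # [c, c] channel correlations

def style_layer_loss(a, x):
    # a: constant features of the style image, x: symbolic features of the input
    _, h, w, c = a.get_shape().as_list()
    A = gram_matrix(a)
    G = gram_matrix(x)
    # normalized squared Frobenius distance between the Gram matrices
    return tf.reduce_sum(tf.pow(G - A, 2)) / (4. * (c ** 2) * ((h * w) ** 2))
```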
The optimizer is then called with the current session:

```python
optimizer.minimize(sess)
```
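Judging from the fact that `minimize` takes a session, the optimizer is presumably a `tf.contrib.opt.ScipyOptimizerInterface` (L-BFGS) rather than a regular `tf.train` optimizer; `total_loss` and `maxiter` below are my guesses:

```python
# assumed setup: total_loss is the combined content + style loss tensor
optimizer = tf.contrib.opt.ScipyOptimizerInterface(
    total_loss, method='L-BFGS-B', options={'maxiter': 1000})

sess.run(tf.global_variables_initializer())
optimizer.minimize(sess)  # the whole L-BFGS optimization runs inside this call
```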
So the session is up and running, but during that run it triggers further `sess.run` calls in the for loop above. Can anyone explain the TensorFlow logic here, especially why `x` ends up containing the feature vector of the input image (and not of the style image)? To me, there seem to be two runs happening in parallel.
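To make my confusion concrete, here is the pattern reduced to a toy graph (purely illustrative; the names are made up):

```python
import numpy as np
import tensorflow as tf

sess = tf.Session()
v = tf.Variable(np.zeros((1, 4), dtype=np.float32))  # stands in for net['input']
layer = v * 2.0                                      # stands in for net[layer]
sess.run(tf.global_variables_initializer())

sess.run(v.assign(np.ones((1, 4), dtype=np.float32)))  # assign the "style image"
a = sess.run(layer)  # evaluated immediately: a is a plain numpy array
x = layer            # x remains the symbolic tensor in the graph
```

Here `a` is fixed at the moment of the `sess.run`, while `x` is just a node in the graph, which is what I mean by two runs seemingly happening in parallel.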