I have some experience creating neural-network graphs with TensorFlow placeholders as inputs. Until now, I believed those graphs could only be evaluated with something like sess.run(), or more precisely as described here.
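To be concrete, this is the kind of placeholder-plus-session pattern I mean (a minimal sketch written against tf.compat.v1 so it also runs under TF 2.x; the op on the placeholder is just an arbitrary example):

```python
import numpy as np
import tensorflow as tf

# TF 1.x style: eager execution must be off for placeholders/sessions.
if tf.executing_eagerly():
    tf.compat.v1.disable_eager_execution()

# Build a graph with a placeholder input and some op on it.
x = tf.compat.v1.placeholder(tf.float32, shape=(None, 4))
y = tf.reduce_sum(x, axis=1)

# Evaluate the graph by feeding the placeholder inside a session.
with tf.compat.v1.Session() as sess:
    out = sess.run(y, feed_dict={x: np.ones((2, 4), dtype="float32")})

print(out)  # a plain numpy array: [4. 4.]
```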
I was learning how a GAN works and came across this tutorial, where the author creates a function (at 11:00 in the video):
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import time
def make_discriminator_model():
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Conv2D(7, (3, 3), padding="same", input_shape=(28, 28, 1)))
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.LeakyReLU())
    model.add(tf.keras.layers.Dense(50, activation='relu'))
    model.add(tf.keras.layers.Dense(1))
    return model
He then evaluates a forward pass as follows:
model_discriminator = make_discriminator_model()
model_discriminator(np.random.rand(1, 28, 28, 1).astype("float32"))
He gets the following output :
<tf.Tensor: id=161, shape=(1, 1), dtype=float32, numpy=array([[0.01451516]], dtype=float32)>
The value numpy=array([[0.01451516]]) is the output of the forward pass.
On running the same code, I get a less informative tensor:
<tf.Tensor 'sequential_5/dense_11/BiasAdd:0' shape=(1, 1) dtype=float32>
Is the difference due to different TensorFlow versions in the two environments? I am using tensorflow 1.14.0; I am not sure which version is used in the video.
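For reference, this is how I checked my environment (standard TensorFlow API only):

```python
import tensorflow as tf

# Report the installed version and whether eager execution is active.
# In my environment this shows 1.14.0 and False; if the video uses
# TF 2.x, eager execution would be on by default there.
print(tf.__version__)
print(tf.executing_eagerly())
```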