
I made this neural net, but every time I run it the starting loss is different and then stays constant for the entire loop. I want to predict one value in 'yy' for every 3 values in 'xx' as input. Also, how can I show my output? For example, I want to show an array of predictions as close as possible to the values in 'yy'.

import tensorflow as tf

xx=(
        [178.72,218.38,171.1],
        [211.57,215.63,173.13],
        [196.25,196.69,116.91],
        [121.88,132.07,85.02],
        [117.04,135.44,112.54],
        [118.13,124.04,97.98],
        [116.73,125.88,99.04],
        [118.75,125.01,110.16],
        [109.69,111.72,69.07],
        [76.57,96.88,67.38],
        [91.69,128.43,87.57],
        [117.57,146.43,117.57]
      )

yy=(
        [212.09],
        [195.58],
        [127.6],
        [116.5],
        [117.95],
        [117.55],
        [117.55],
        [110.39],
        [74.33],
        [91.08],
        [121.75],
        [127.3]
       )


x=tf.placeholder(tf.float32,[None,3])
y=tf.placeholder(tf.float32,[None,1])
n1=5
n2=5
classes=12

def neuralnetwork(data):

    hl1={'weights':tf.Variable(tf.random_normal([3,n1])),'biases':tf.Variable(tf.random_normal([n1]))}   

    hl2={'weights':tf.Variable(tf.random_normal([n1,n2])),'biases':tf.Variable(tf.random_normal([n2]))}

    op={'weights':tf.Variable(tf.random_normal([n2,classes])),'biases':tf.Variable(tf.random_normal([classes]))}

    l1=tf.add(tf.matmul(data,hl1['weights']),hl1['biases'])
    l1=tf.nn.relu(l1)
    l2=tf.add(tf.matmul(l1,hl2['weights']),hl2['biases'])
    l2=tf.nn.relu(l2)
    output=tf.matmul(l2,op['weights'])+op['biases']
    return output

def train(x):
        pred=neuralnetwork(x)
       # cost=tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred,labels=y))
        sq = tf.square(pred-y)
        loss=tf.reduce_mean(sq)

        optimizer = tf.train.GradientDescentOptimizer(0.01)
        train = optimizer.minimize(loss)

        #optimizer=tf.train.RMSPropOptimizer(0.01).minimize(cost)
        epochs=100



        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            for epoch in range(epochs):
                epoch_loss=0
                for i in range (int(1)):
                    batch_x=xx
                    batch_y=yy
                  # a=tf.shape(xx)
                   #print(sess.run(a))
                    c=sess.run(loss,feed_dict={x:batch_x, y: batch_y})
                    epoch_loss+=c
                    print("Epoch ",epoch," completed out of ",epochs, 'loss:', epoch_loss)


train(x)
1 Answer


I am not sure what exactly you are trying to accomplish, but it seems to me this is a regression problem, not a classification problem. I think the following code is what you want. I have cleaned it up a little but tried to keep it in a form you will recognize; I would personally write it in a different way.

import tensorflow as tf

xx = (
    [178.72, 218.38, 171.1],
    [211.57, 215.63, 173.13],
    [196.25, 196.69, 116.91],
    [121.88, 132.07, 85.02],
    [117.04, 135.44, 112.54],
    [118.13, 124.04, 97.98],
    [116.73, 125.88, 99.04],
    [118.75, 125.01, 110.16],
    [109.69, 111.72, 69.07],
    [76.57, 96.88, 67.38],
    [91.69, 128.43, 87.57],
    [117.57, 146.43, 117.57]
)

yy = (212.09, 195.58, 127.6, 116.5, 117.95, 117.55, 117.55,
      110.39, 74.33, 91.08, 121.75, 127.3)

x = tf.placeholder(tf.float32, [None, 3])
y = tf.placeholder(tf.float32, [None])


def neuralnetwork(data, n1=5, n2=5):
    hl1 = {'weights': tf.Variable(tf.random_normal([3, n1])), 'biases':
           tf.Variable(tf.random_normal([n1]))}

    hl2 = {'weights': tf.Variable(tf.random_normal([n1, n2])),
           'biases': tf.Variable(tf.random_normal([n2]))}

    op = {'weights': tf.Variable(tf.random_normal([n2, 1])), 'biases':
          tf.Variable(tf.random_normal([1]))}

    l1 = tf.add(tf.matmul(data, hl1['weights']), hl1['biases'])
    l1 = tf.nn.relu(l1)
    l2 = tf.add(tf.matmul(l1, hl2['weights']), hl2['biases'])
    l2 = tf.nn.relu(l2)
    output = tf.matmul(l2, op['weights']) + op['biases']
    return output


N_EPOCHS = 100
if __name__ == '__main__':
    pred = neuralnetwork(x)
    loss = tf.reduce_mean(tf.squared_difference(pred, y))

    optimizer = tf.train.GradientDescentOptimizer(0.01)
    train = optimizer.minimize(loss)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(N_EPOCHS):
            epoch_loss = sess.run([train, loss], feed_dict={x: xx, y: yy})[1]
            print("Epoch", epoch, " completed out of", N_EPOCHS, "loss:",
                  epoch_loss)

You are making two primary mistakes:

  1. You are trying to have 12 output nodes; what you probably want is a single output node that tries to predict the corresponding y value.

  2. You are never running the train operation, so the optimizer is not actually doing anything (see the snippet after this list).
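
For the second point, the minimal change inside your original training loop (a rough sketch, reusing the names from your own code) is to run the train op together with the loss in a single sess.run call:

# Run the training step and the loss together so the optimizer actually updates
# the weights; the underscore discards the (None) return value of the train op.
_, c = sess.run([train, loss], feed_dict={x: batch_x, y: batch_y})
epoch_loss += c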

Regarding "Also how can I show my output? For example: I want to show an array having predictions as close as possible to the values in 'yy'":

You can do that, for example, with these lines:

predictions = sess.run(pred, feed_dict={x: xx, y: yy})
print("Predictions:", predictions)

This evaluates only the part of the computational graph that is needed to compute the pred tensor, using the entire dataset as input by feeding it into the x placeholder.
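
If you want to compare the predictions directly with the targets, a simple loop over both arrays (plain Python, reusing the names from the snippet above) prints them side by side:

for target, prediction in zip(yy, predictions):
    # prediction is a length-1 array because the network has a single output node
    print("target:", target, "prediction:", prediction[0])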

However, as you can see, your network simply learns to predict the average value of your labels, no matter the input.
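
You can check this yourself by comparing the predictions against the plain mean of the labels (a small sketch using NumPy, which is not imported in the snippets above):

import numpy as np

print("Mean of yy:", np.mean(yy))           # the value the predictions drift towards
print("Predictions:", predictions.ravel())  # flattened to 1-D for easier reading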

  • Thank you for the help. But what does " if __name__ == '__main__' " mean? And I tried your code as well; it is giving: Predictions: [[1073744.2] [1073744.2] [1073744.2] [1073744.2] [1073744.2] [1073744.2] [1073744.2] [1073744.2] [1073744.2] [1073744.2] [1073744.2] [1073744.2]] which is not even the average value of my labels – xposure Apr 21 '18 at 14:42
  • You are not getting the average value because 100 epochs are not enough; convergence starts at roughly 10^3 epochs. You might want to take a look at how to use TensorBoard or even matplotlib to visualize the learning process. – Alexander Harnisch Apr 23 '18 at 07:59
  • I would strongly advise you to try Google or the Stack Overflow search before posting such questions. It works faster for everybody. You might want to read [this](https://stackoverflow.com/questions/419163/what-does-if-name-main-do?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa). – Alexander Harnisch Apr 23 '18 at 08:05
  • I would also advise you to work through some general Python and TensorFlow tutorials; there are plenty out there. I don't mean any offense, but it seems to me that you do not really understand what you are doing or what TensorFlow actually does in the background for you. You could start [here](https://www.tensorflow.org/tutorials/). One more word of advice: try to at least in some way use a consistent code style when posting questions here, preferably [PEP8](https://www.python.org/dev/peps/pep-0008/). – Alexander Harnisch Apr 23 '18 at 08:06
  • Thank you for your advice. I think I need to gain more knowledge about the subject, but for now I have corrected some things and it is giving me the average value for my labels as you said. Thank you for your help. – xposure Apr 23 '18 at 09:07
  • Glad to help. Please don't forget to accept my answer if you think it helped you. – Alexander Harnisch Apr 23 '18 at 09:24