I have the following NN model in a Jupyter notebook:
b = tf.Variable(tf.zeros([target_count]))
W = tf.Variable(tf.zeros([feature_count, target_count]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
cross_entropy = tf.reduce_mean(
    -tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
I am using the following script to train this model:
# Initialize variables once, before the training loop
# (re-running the initializer inside the loop would reset W and b every epoch).
init_op = tf.global_variables_initializer()
sess.run(init_op)
for epoch in range(training_epochs):
    curr_data_batch, curr_target_batch = sess.run(
        [data_batch, target_batch], options=run_options)
    sess.run([train_op],
             feed_dict={x: curr_data_batch, y_: curr_target_batch})
For multi-label classification, I want to create n binary classifiers, one per label. How should I modify this code to create n classifiers?
Do I need to create a separate graph for each model?
The input data is the same for all models, but the output y_ differs per model. If I create separate graphs, can I still share the input data across the models?
Do I need to create a separate session for each model?
If the models live in separate graphs, can I still, at prediction time, run all of them on the same input data and collect the predicted results?
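To make the question concrete, here is a plain-NumPy sketch (not TensorFlow) of what I think n binary classifiers sharing one input amount to mathematically: one weight column and bias per label, trained with sigmoid cross-entropy, all evaluated on the same X. The shapes, the synthetic data, and the name W_true are made up for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

feature_count, target_count = 4, 3        # n = target_count binary classifiers
X = rng.normal(size=(64, feature_count))  # shared input for all classifiers

# Synthetic, linearly separable binary labels: one column per classifier.
W_true = rng.normal(size=(feature_count, target_count))
Y = (X @ W_true > 0).astype(float)

# One weight column and one bias per classifier; they share X but nothing else.
W = np.zeros((feature_count, target_count))
b = np.zeros(target_count)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.1
for epoch in range(200):
    p = sigmoid(X @ W + b)            # (64, target_count): all n classifiers at once
    grad_logits = (p - Y) / len(X)    # gradient of mean sigmoid cross-entropy
    W -= learning_rate * (X.T @ grad_logits)
    b -= learning_rate * grad_logits.sum(axis=0)

# Prediction: a single matrix product evaluates all n classifiers on the same input.
pred = (sigmoid(X @ W + b) > 0.5).astype(float)
accuracy_per_label = (pred == Y).mean(axis=0)
print(accuracy_per_label)
```

Since the n classifiers only interact through the shared input X, I am wondering whether the TensorFlow version needs n graphs/sessions at all, or whether one graph with n output columns (as above) is the intended structure.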