
I have a system where 3-4 separate processes continuously read from a model to generate predictions.

This is for reinforcement learning in a video game, so I cannot use pre-built worker/queue pipelines to feed the data in advance.

I then want to send the actions/rewards to a central process for learning; after it updates the weights, all the other processes will need the updated weights too.

I have looked at https://www.tensorflow.org/deploy/distributed and https://clusterone.com/blog/2017/09/13/distributed-tensorflow-clusterone/

Most examples do the opposite: the training runs on the distributed machines.

How can I set up the task workers so that the task they run is just a prediction step instead of a train step?

    train_step = (
        tf.train.AdamOptimizer(learning_rate)
        .minimize(loss, global_step=global_step)
    )

This will not work in my case unless I can also grab the prediction data outside of it.
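To illustrate the separation I am after, here is a toy model (plain Python, no TensorFlow; the class and weight are made up) where the prediction step and the train step are distinct operations on the same weights:

```python
# Toy illustration: the game processes would only ever call predict(),
# while the central learner is the only one calling train_step().
class TinyModel:
    def __init__(self, lr=0.1):
        self.w = 0.5          # single shared weight, stand-in for the network
        self.lr = lr

    def predict(self, x):
        # Inference only -- this is all the task workers should run.
        return self.w * x

    def train_step(self, x, target):
        # Gradient of squared error (predict(x) - target)**2 w.r.t. w.
        grad = 2 * (self.predict(x) - target) * x
        self.w -= self.lr * grad
        return self.w

model = TinyModel()
before = model.predict(2.0)   # prediction alone, no training -> 1.0
model.train_step(2.0, 3.0)    # only the learner runs this
after = model.predict(2.0)    # prediction now uses the updated weight
```

In TensorFlow terms, I need the worker tasks to run only the equivalent of `predict`, while the central process runs `train_step` and pushes the new weights back out.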

Also, each process is created externally, outside of my control, so TensorFlow cannot create the processes itself.

It is similar to this question: How to run several Keras neural networks in parallel

But that question has no answers, and it is based on Theano whereas mine is on TensorFlow.

Also similar to this: Running Keras model for prediction in multiple threads

But mine runs in separate processes, not threads.

dtracers
