After reading the discussions on the differences between Mesos and Kubernetes, and on Kubernetes vs. Mesos vs. Swarm, I am still confused about how to create a Spark and TensorFlow cluster with Docker containers across some bare-metal hosts and an AWS-like private cloud (OpenNebula).
Currently, I am able to build a static TensorFlow cluster with Docker containers manually distributed to different hosts. I only run a standalone Spark instance on a bare-metal host. Instructions for manually setting up a Mesos cluster for containers can be found here.
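For context, the static TensorFlow cluster I wire up by hand looks roughly like the sketch below: a fixed cluster spec listing every task, passed to each container via `TF_CONFIG`. The hostnames `host1`–`host3` and port `2222` are placeholders, not my real addresses.

```python
import json

# Hypothetical host addresses: in my setup, each task runs in its own
# Docker container on a different bare-metal host.
cluster_spec = {
    "ps": ["host1:2222"],                    # parameter server task
    "worker": ["host2:2222", "host3:2222"],  # worker tasks
}

# Every container receives the same cluster spec plus its own role,
# e.g. the environment for the first worker:
tf_config = json.dumps({
    "cluster": cluster_spec,
    "task": {"type": "worker", "index": 0},
})
print(tf_config)
```

Changing the cluster size means editing this spec and restarting every container by hand, which is exactly the part I hope Mesos or Kubernetes can automate.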
Since my resources are limited, I would like to find a way to deploy Docker containers onto this mixed infrastructure to build either a TensorFlow or a Spark cluster, so that I can run data analysis with either framework on the same resources.
Is it possible to quickly create, run, and undeploy a Spark or TensorFlow cluster with Docker containers on a mixed infrastructure using Mesos or Kubernetes? How can I do that?
Any comments and hints are welcome.