We have many batch jobs that are currently scheduled via cron expressions within a single application. We would like to isolate these jobs better and therefore move them to Spring Cloud Task.
But after reading the documentation [1], I come to the conclusion that I have to use a triggertask (source), which sends a TaskLaunchRequest to a tasklauncher (sink), which finally launches the new process.
This means that (even if I have only one task/batch job) I need at least the following JVM processes running just to trigger one new process:
- Data Flow server
- triggertask (source)
- tasklauncher (sink)
OK, the Data Flow server and the tasklauncher can be shared across all tasks, but a triggertask instance can only hold the cron definition for a single task and therefore has to be replicated for every additional task definition. So I need at least one "nanny process" for each task?
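Just to make the point concrete, here is a rough, illustrative sketch of what such a "nanny process" essentially boils down to. This is not the actual triggertask starter code: it assumes Spring Cloud Stream's annotation-based Source binding and the TaskLaunchRequest class from spring-cloud-task-stream; the class name, cron expression and the `my.task.uri` property are invented for the example, and the exact TaskLaunchRequest constructor arguments differ between Spring Cloud Task versions.

```java
import java.util.Collections;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.cloud.task.launcher.TaskLaunchRequest;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;

// Illustrative stand-in for the triggertask source: a permanently running
// JVM whose only job is to emit a TaskLaunchRequest on a cron schedule.
@SpringBootApplication
@EnableBinding(Source.class)
@EnableScheduling
public class MyTaskTrigger {

    private final Source source;

    // Artifact of the task to launch, e.g. a maven:// or docker: URI
    // (hypothetical property name for this sketch).
    @Value("${my.task.uri}")
    private String taskUri;

    public MyTaskTrigger(Source source) {
        this.source = source;
    }

    // One cron expression per running source instance -- this is why each
    // task definition ends up needing its own "nanny process".
    @Scheduled(cron = "0 0 2 * * *")
    public void fire() {
        // Newer Spring Cloud Task versions also take an application name argument.
        TaskLaunchRequest request = new TaskLaunchRequest(
                taskUri,
                Collections.emptyList(),  // command-line arguments
                Collections.emptyMap(),   // environment properties
                Collections.emptyMap());  // deployment properties
        source.output().send(MessageBuilder.withPayload(request).build());
    }

    public static void main(String[] args) {
        SpringApplication.run(MyTaskTrigger.class, args);
    }
}
```

Every task definition would need its own copy of something like this, deployed with a different cron expression and task URI, while the Data Flow server and the tasklauncher sink stay shared.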
Really??? This sounds like huge overkill. From my point of view, I would have expected cron scheduling to be a core feature of the task definition itself, so the only thing needed would be the Data Flow server.
Do I understand this correctly, or is there anything I have missed? Is there an easier way to do this in the Spring Cloud environment? I really like the idea of having a Data Flow server that starts new JVMs when required, but all these additional processes really feel like the wrong approach.
If this is supposed to run on Cloud Foundry, e.g. http://run.pivotal.io, it means a cron scheduler for a single job costs me $35/month (because as of Java Buildpack 4.0 a JVM process with only 512 MB will no longer start [2]) - that's an expensive cron definition...
[1] https://github.com/spring-cloud/spring-cloud-stream-app-starters/tree/master/triggertask/spring-cloud-starter-stream-source-triggertask
[2] https://www.cloudfoundry.org/just-released-java-buildpack-4-0/