
I have a Flink streaming job that reads from Kafka and writes into the appropriate partitions in the file system. For instance, the job is configured to use a bucketing sink that writes to /data/date=${date}/hour=${hour}.

How can I detect that a partition is ready to be used, so that a corresponding Airflow pipeline can run some batch processing on top of that hour?

  • This looks like a variation of https://stackoverflow.com/questions/54094729/process-elements-after-sinking-to-destination, yes? – kkrugler Jan 11 '19 at 19:51
  • No, that question assumes a certain way of doing it, while this one asks what the right way to do it would be. – Achyuth Samudrala Jan 12 '19 at 06:42

1 Answer


You could look at the implementation of the ContinuousFileMonitoringSource to see how it monitors the file system, and then do something similar to what David Anderson suggested in your other question regarding creating a custom ProcessFunction.
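To make the idea concrete on the Airflow side, here is a minimal sketch (not part of the answer above, and the function names are hypothetical) of a readiness check that Airflow could poll, e.g. from a PythonSensor. It uses the directory layout from the question and treats an hour partition as complete once the sink has rolled over to the next hour's directory:

```python
import datetime as dt
from pathlib import Path

def partition_path(base: Path, ts: dt.datetime) -> Path:
    # Mirrors the bucketing sink layout from the question:
    # /data/date=${date}/hour=${hour}
    return base / f"date={ts:%Y-%m-%d}" / f"hour={ts:%H}"

def partition_ready(base: Path, ts: dt.datetime) -> bool:
    """Heuristic: the hour partition for `ts` is considered complete
    once the *next* hour's directory exists, i.e. the sink has rolled
    over and should no longer be writing into this partition."""
    nxt = ts + dt.timedelta(hours=1)
    return partition_path(base, ts).is_dir() and partition_path(base, nxt).is_dir()
```

Note this is only a heuristic (late or in-flight files can still appear); a more robust variant is to have the Flink job itself write an explicit marker file (e.g. a `_SUCCESS` flag) into the partition once it is finalized, and have Airflow sense that file instead.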

kkrugler