
I have decided to use Kafka for an event sourcing implementation and there are a few things I am still not quite sure about. One is finding a good way of recreating my materialized views (stored in a Postgres database) in case of failures.

I am building a messaging application, so consider the example of a service receiving a REST request to create a new message. It will validate the request and then create an event in Kafka (e.g. "NewMessageCreated"). The service (and possibly other services as well) will then pick up that event in order to update its local database. Let's assume however that the database has crashed, so saving the message in the database fails. If I understand correctly, the way to deal with this situation is to empty the database and try to recreate it by replaying all Kafka events.
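As a sketch of the replay step, here a plain Python list stands in for the Kafka topic (with a real consumer you would seek to the beginning of each partition instead); the event shape and field names are made up for illustration:

```python
# Hypothetical sketch: rebuild a materialized view by replaying an event
# log from offset 0. A list stands in for the Kafka topic.

def rebuild_view(event_log):
    """Replay every event in order and fold it into a fresh view."""
    view = {}  # userId -> list of message texts, i.e. the materialized view
    for event in event_log:
        if event["type"] == "NewMessageCreated":
            view.setdefault(event["userId"], []).append(event["text"])
    return view

events = [
    {"type": "NewMessageCreated", "userId": "u1", "text": "hello"},
    {"type": "NewMessageCreated", "userId": "u2", "text": "hi"},
    {"type": "NewMessageCreated", "userId": "u1", "text": "again"},
]
print(rebuild_view(events))  # {'u1': ['hello', 'again'], 'u2': ['hi']}
```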

If my assumption is correct I can see the following issues:

1) I need to enforce ordering by userId for my "messages" topic (so all messages from a particular user are consumed in order), and this means I cannot use Kafka's log compaction feature for that topic. As a result I will always have to replay all events from Kafka, no matter how big my application becomes! Is there a better way to address this?
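For context on the ordering requirement: Kafka only guarantees order within a partition, so per-user ordering is usually achieved by keying messages by userId, which routes all of a user's messages to the same partition. A simplified stand-in for that key-based routing (Kafka's default partitioner actually uses murmur2 hashing, not CRC32):

```python
# Simplified stand-in for Kafka's key-based partitioner: all messages
# sharing a key (here userId) map to the same partition, preserving
# per-user ordering without a single global partition.
import zlib

def partition_for(user_id: str, num_partitions: int) -> int:
    return zlib.crc32(user_id.encode()) % num_partitions

# Every message from the same user lands on the same partition:
assert partition_for("user-42", 8) == partition_for("user-42", 8)
```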

2) Each time I replay events from Kafka, they may trigger the creation of new events (e.g. a consumer might do some processing and then generate a new event before committing). This sounds really problematic, so I am wondering whether, instead of simply replaying the events when rebuilding my caches, I should process them but disable the generation of new events (even though this would require extra code and seems cumbersome).
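One way to picture the "disable generation of new events" idea is a replay flag on the handler: the view is always updated, but side effects are only emitted when consuming live. A hypothetical sketch (the event shape and the follow-up event name are invented):

```python
# Hypothetical sketch: the same handler runs live and during replay, but
# a `replaying` flag suppresses publication of follow-up events so a
# rebuild does not re-trigger downstream processing.

published = []  # stands in for a Kafka producer

def handle(event, view, replaying=False):
    view[event["id"]] = event["payload"]        # always update the view
    if not replaying:                           # side effects only when live
        published.append({"type": "MessageIndexed", "id": event["id"]})

view = {}
handle({"id": 1, "payload": "hi"}, view)                  # live: publishes
handle({"id": 1, "payload": "hi"}, view, replaying=True)  # rebuild: silent
assert len(published) == 1
```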

3) When an error occurs (e.g. due to some resource failure or due to a bug) while consuming a message, should I commit the message and publish an error to a Kafka topic, or should I not commit at all? In the latter case, subsequent messages in the same partition cannot be committed either (otherwise they would implicitly commit the failed one as well).
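The first option is essentially a dead-letter pattern. A hypothetical sketch, with a list standing in for the error topic and a loop index standing in for the partition offset: the poison message is diverted, the offset still advances, and later messages are not blocked.

```python
# Hypothetical sketch of "commit and divert": a failing event goes to a
# dead-letter list (standing in for an error topic) and the offset still
# advances, so the partition is not stuck behind the poison message.

dead_letters = []

def consume(events):
    committed_offset = -1
    for offset, event in enumerate(events):
        try:
            if event.get("bad"):
                raise ValueError("poison message")
            # ... normal processing ...
        except ValueError:
            dead_letters.append((offset, event))  # divert instead of blocking
        committed_offset = offset                 # commit either way
    return committed_offset

last = consume([{"id": 1}, {"id": 2, "bad": True}, {"id": 3}])
assert last == 2 and len(dead_letters) == 1
```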

Any ideas how to address these issues? Thanks.

  • Why not GetEventStore? – Constantin Galbenu May 06 '17 at 18:36
  • @ConstantinGALBENU Thanks for answering. I am aware of GetEventStore but we have already committed on using Kafka (for other reasons as well), so that is unfortunately not an option. – George May 06 '17 at 19:04
  • Then I suggest reading more about event sourcing, as I feel from your question that some general event-sourcing understanding is blurred (or maybe it's just me not understanding you). – Constantin Galbenu May 06 '17 at 19:50
  • I am certainly not experienced with ES, which is why I am asking for help here :( At the same time I am reading whatever relevant information I can find, but of course that is not the same as having done this before. Can you elaborate on the things in my question you feel I have misunderstood or failed to convey? – George May 07 '17 at 12:29
  • "... if instead of just replaying the events when rebuilding my caches, I should be processing the events but disable generation of new events..." - there are two kinds of event subscribers: readmodels and sagas/process managers. Only the second type generates new events, and not directly but by issuing other commands, and you should not "disable" generation of those events as they are important. – Constantin Galbenu May 07 '17 at 13:47
  • 1
    You should read this before jumping into kafka as es: http://stackoverflow.com/q/17708489/2575224 – Constantin Galbenu May 07 '17 at 13:55
  • @ConstantinGALBENU thanks again for your comments. I had read that article as well as all the comments (including the one from Jay Kreps). As I understand it, there are some things I have to do somewhat differently with Kafka as an event store. However, as I mentioned, there is no option to go with something else at this point. From your comment I understand that subscribers of type "readmodels" are the only ones that should be involved in recreating the databases from Kafka, but not "sagas". Am I correct in this? Could you suggest any references with more info on this? Thanks again. – George May 07 '17 at 15:32
  • Yes. Readmodels update some databases that are used by queries. Sagas generate commands based on events. One article about sagas is http://blog.jonathanoliver.com/cqrs-sagas-with-event-sourcing-part-i-of-ii/ – Constantin Galbenu May 07 '17 at 18:31
  • Thanks for the reference, that was a good read. My feeling is that an implementation of sagas using Kafka might be a little different (due to the fact that Kafka is not a DB), so I will search a little more to see if there are any relevant implementations out there (from a quick search that does not seem to be the case). – George May 08 '17 at 09:43
  • Sagas need only-once-delivery. You can implement that over any event store implementation using a SagaTracker - a component that "remembers" what events it has already dispatched. – Constantin Galbenu May 08 '17 at 09:58
  • Or with any messaging infrastructure that can give you only-once, at-least-once, at-most-once and in-order delivery – Constantin Galbenu May 08 '17 at 10:01
  • I guess you mean exactly-once processing of each event. I understand that sagas help in orchestrating a long-running flow (spanning multiple events/services), so they need to keep track of the current state as well as handle compensating actions. I expect doing this with Kafka will involve some intricacies. For example, is it better to implement this as a process manager or a routing slip? (I feel the latter fits better with Kafka but is more complicated to implement.) How does the saga save its current state in Kafka? How does it recreate its state after failures, etc.? – George May 08 '17 at 10:37
  • Yes, you are right, you need all except in-order as Sagas could be designed to cope with event ordering. – Constantin Galbenu May 08 '17 at 10:59
  • For saga persistence you would need some CRUD-style persistence, or you could try to event-source even the sagas (you would just need to remember which events you have processed and only apply them, as opposed to processing them). – Constantin Galbenu May 08 '17 at 11:05
  • For state-recreation it depends on your technology. If my saga needs an additional event type as a result of a business-rule change, I use a SagaRecreator that sends only unprocessed events. – Constantin Galbenu May 08 '17 at 11:12
  • For full-state-recreation you would have to have two methods per event: applyEventX and processEventX and call only apply for already-processed events. – Constantin Galbenu May 08 '17 at 11:17
  • Thanks, you have been most helpful. I want to ask one last thing. Assume MsgService accepts a REST request to create a new message and it generates a new event to signify that the message is created. MsgService picks up that event to update its DB. Should it then generate a new event to signify that the message was actually created or is the 1st event sufficient? Other services might use that event to update their own caches, so should they use the 1st event (sent before the MsgService's DB was updated) or the 2nd? I feel the 1st is enough and I guess the same answer applies to Sagas as well. – George May 08 '17 at 11:39
  • It should be enough. Events should be created only when something *real* happens. You may refer to the `infrastructure events` that could be useful but those are not generally persisted. – Constantin Galbenu May 08 '17 at 11:42
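The apply/process split described in the comments above can be sketched as follows. This is a hypothetical illustration (class and method names invented): during recreation only the `apply_*` method runs, mutating the saga's state without re-issuing commands, while genuinely new events go through `process_*`, which also dispatches commands.

```python
# Hypothetical sketch of the two-methods-per-event saga recreation:
# apply_* changes state only; process_* changes state AND issues commands.

class MessageSaga:
    def __init__(self):
        self.seen = set()
        self.commands = []  # commands the saga would dispatch

    def apply_message_created(self, event):
        self.seen.add(event["id"])          # state change only

    def process_message_created(self, event):
        self.apply_message_created(event)   # state change ...
        self.commands.append(("NotifyUser", event["id"]))  # ... plus a command

    def recreate(self, history):
        for e in history:                   # already-processed events:
            self.apply_message_created(e)   # apply only, no commands re-issued

saga = MessageSaga()
saga.recreate([{"id": 1}, {"id": 2}])
assert saga.commands == []                  # replay produced no new commands
saga.process_message_created({"id": 3})
assert saga.commands == [("NotifyUser", 3)]
```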

0 Answers