
I am starting a project where I want to have multiple services that communicate with each other using Axon Server.

I have more than one service with the following stack:

  • Spring Boot 2.3.0.RELEASE (with starters: Data JPA, Web, MySQL)
  • Axon Spring Boot Starter - 4.2.1

Each of the services uses a different schema in the MySQL server.

When I start a Spring Boot service with the Axon Framework enabled, tables for tokens, sagas, etc. are created in that application's database schema.

I have two questions:

  1. In the architecture that I am trying to build, should I have only one database for all the ‘axon enabled’ services, so that the sagas, tokens, events, etc. are in only one place?

  2. If so, can anyone provide an example of how to configure a custom EntityManagerProvider so that the service's database is kept separate from Axon's database?

Alfredo

2 Answers


I assume each of your microservices models a sub-domain. Since events model a (sub-)domain just as aggregates, entities and value objects do, I very much favor keeping the Axon-related tables separated per service, most likely in the same database/schema as the corresponding service. I would, thus, prefer a modeling-first approach when considering such technical options.

It is what we're currently doing in our microservices ecosystem.

There is at least one more technical reason to go with the same schema (one per sub-domain, that is) for both the Axon assets and the application-specific assets; it was pointed out to me by my colleague Marian. If you (will) use Event Sourcing (thus reconstructing the state of an aggregate by fetching and applying all past events that resulted from handling the commands), then you will most likely need transactions that encompass both this fetching and the command-handling code, which might in turn trigger (through events) writes to your microservice-specific database.

Octavian Theodor
  • Thanks for your response @Octavian Theodor. In order to do that I would still need to create two datasources in each microservice, one for the microservice schema and one for the Axon tables, and that is the second part of my question: are you using a custom EntityManagerProvider in your setup? Could you provide an example of how to use it? – Alfredo Jun 26 '20 at 09:14
  • Actually, we're currently storing the Axon-related tables in the same schema where we store the rest of our tables, etc. It was not me who set this up but we haven't encountered any issues, as of yet :) – Octavian Theodor Jun 26 '20 at 09:24
  • Considering that an event does model your domain and that, according to the answer here -- https://stackoverflow.com/questions/54933692/why-is-eventsourcinghandler-in-aggregate-object-needed, the state of an aggregate is not necessarily maintained as-is, but as a series of events, it also makes sense that at least the events should be stored along with the other domain-modeling things. Although @Steven might have a more informed opinion... – Octavian Theodor Jun 26 '20 at 09:29

Axon can require up to five tables, depending on your usage of Axon, of course. These are:

  1. The Event table.
  2. The Snapshot Event table.
  3. The Token table.
  4. The Saga table.
  5. The Association Value Entry table.

When using Axon Server, tables 1 and 2 will not be created, since Axon Server is the storage solution for events and snapshots. When not using Axon Server, I would indeed suggest having a dedicated datasource for these.
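
To answer the second part of the original question: a common way to give Axon its own database in a Spring Boot application is to define a second datasource and EntityManagerFactory for Axon's JPA entities and expose an EntityManagerProvider that points at it. Below is a minimal sketch; the "axon.datasource" property prefix, the bean names and the scanned entity packages are assumptions that may need adjusting to your project and Axon version.

    import javax.persistence.EntityManagerFactory;
    import javax.sql.DataSource;

    import org.axonframework.common.jpa.EntityManagerProvider;
    import org.axonframework.common.jpa.SimpleEntityManagerProvider;
    import org.springframework.beans.factory.annotation.Qualifier;
    import org.springframework.boot.context.properties.ConfigurationProperties;
    import org.springframework.boot.jdbc.DataSourceBuilder;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
    import org.springframework.orm.jpa.SharedEntityManagerCreator;
    import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;

    @Configuration
    public class AxonPersistenceConfig {

        // Dedicated datasource for Axon's tables; the "axon.datasource.*"
        // property prefix is an assumption for this sketch.
        @Bean
        @ConfigurationProperties("axon.datasource")
        public DataSource axonDataSource() {
            return DataSourceBuilder.create().build();
        }

        // Persistence unit that scans only Axon's JPA entities (tokens, sagas
        // and, when not using Axon Server, events/snapshots).
        @Bean
        public LocalContainerEntityManagerFactoryBean axonEntityManagerFactory(
                @Qualifier("axonDataSource") DataSource axonDataSource) {
            LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
            emf.setDataSource(axonDataSource);
            emf.setPersistenceUnitName("axon");
            emf.setPackagesToScan(
                    "org.axonframework.eventhandling.tokenstore",
                    "org.axonframework.modelling.saga.repository.jpa",
                    "org.axonframework.eventsourcing.eventstore.jpa");
            emf.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
            return emf;
        }

        // The EntityManagerProvider Axon's JPA stores can use; it wraps a
        // shared, transaction-aware EntityManager proxy for the "axon" unit.
        @Bean
        public EntityManagerProvider axonEntityManagerProvider(
                @Qualifier("axonEntityManagerFactory") EntityManagerFactory entityManagerFactory) {
            return new SimpleEntityManagerProvider(
                    SharedEntityManagerCreator.createSharedEntityManager(entityManagerFactory));
        }
    }

You would still need transaction management for that unit (e.g. a JpaTransactionManager bound to the "axon" EntityManagerFactory), while the application's own datasource and EntityManagerFactory stay configured as before. Axon's JPA stores can then be wired explicitly against this provider, as sketched below for the TokenStore and SagaStore.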

Table 3, which services the TokenStore, should be as close as possible to your Query Models. The tokens track how far a given EventProcessor has come in handling events. As these EventProcessors typically service the projectors which create your query models, keeping them together is sensible from a transactional perspective.
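
If you follow that advice, you could wire the TokenStore explicitly against the persistence unit that holds your query models rather than the dedicated Axon one. A short sketch, assuming a bean named applicationEntityManagerProvider exists for that unit:

    import org.axonframework.common.jpa.EntityManagerProvider;
    import org.axonframework.eventhandling.tokenstore.TokenStore;
    import org.axonframework.eventhandling.tokenstore.jpa.JpaTokenStore;
    import org.axonframework.serialization.Serializer;
    import org.springframework.beans.factory.annotation.Qualifier;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class TokenStoreConfig {

        // Keep the tokens in the same persistence unit as the query models,
        // so that token updates and projection updates can share a transaction.
        @Bean
        public TokenStore tokenStore(
                @Qualifier("applicationEntityManagerProvider") EntityManagerProvider entityManagerProvider,
                Serializer serializer) {
            return JpaTokenStore.builder()
                    .entityManagerProvider(entityManagerProvider)
                    .serializer(serializer)
                    .build();
        }
    }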

Tables 4 and 5 are both required for Sagas. The "Saga table" stores the serialized sagas, whereas the "Association Value Entry table" carries the association values between events and sagas, so that the framework can load the right sagas. I'd store these either in a dedicated database or along with the other tables of the given (micro)service.
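
Along the same lines, the saga store can be pointed at whichever persistence unit you choose for those two tables. A sketch, assuming the axonEntityManagerProvider bean from the earlier configuration:

    import org.axonframework.common.jpa.EntityManagerProvider;
    import org.axonframework.modelling.saga.repository.SagaStore;
    import org.axonframework.modelling.saga.repository.jpa.JpaSagaStore;
    import org.axonframework.serialization.Serializer;
    import org.springframework.beans.factory.annotation.Qualifier;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class SagaStoreConfig {

        // Store the serialized sagas and their association values in the
        // chosen persistence unit (here: the dedicated Axon one).
        @Bean
        public SagaStore<Object> sagaStore(
                @Qualifier("axonEntityManagerProvider") EntityManagerProvider entityManagerProvider,
                Serializer serializer) {
            return JpaSagaStore.builder()
                    .entityManagerProvider(entityManagerProvider)
                    .serializer(serializer)
                    .build();
        }
    }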

Steven
  • What's the added value in keeping (only) those two tables in a separate ds? In my mind, the events also model the domain, right? Also, I might be gravely mistaken but, when replaying these events (I am talking about the sourced aggregates) in order to perform some validation/logic and later on update and store, we might want to have all those in the same transaction, right? – Octavian Theodor Jun 29 '20 at 11:25
  • I am assuming that by "those two tables" you are talking about the "event and snapshot event table". See it as the segregation between the Command Model, which is your events and snapshots when using event sourcing, and your Query Models. You wouldn't want the choice of event database to impose constraints on what your query models are stored in. Those concerns need to be segregated, just so that you can choose the most optimal database set-up for those query models. – Steven Jun 29 '20 at 11:36
  • Added to that, the event/snapshot database can serve as a means to distribute your Event Store too. Doing the same for your Query Model database might not always be what you desire. Hence, again, a reason to segregate these and allow for this flexibility. – Steven Jun 29 '20 at 11:37
  • Lastly, your second sentence about replaying events does not give me a clear picture of what you are looking for. Please elaborate there if you feel that's been left unanswered so far. – Steven Jun 29 '20 at 11:38
  • Ah, I _now_ see what you're aiming at, thanks for clarifying :) Indeed, in my mind, I was referring only to the so-called write (or command) model and never to the query one. Especially since in our app the query model is an Elasticsearch index and not an RDB, hence they are "naturally" separated... – Octavian Theodor Jun 29 '20 at 12:20
  • And regarding my second sentence, I guess the question was: "Does the event sourcing loading happen in the same transaction as the command handling and the eventual command model update?" – Octavian Theodor Jun 29 '20 at 12:27
  • Ah gotcha, yes, the transaction used to read an aggregate's event stream is the same one that is used to handle the command and store any subsequent events. – Steven Jun 29 '20 at 14:59