When designing a distributed system, perhaps one based on microservices, and considering an event architecture, there are several models and technologies to choose from. When choosing how to implement the architecture, the non-functional requirements are a main factor, David Dawson claimed when describing different styles of event architectures in a recent blog post.
Dawson, a freelance systems architect, defines an Event Architecture simply as a software architecture based on events, and since events are part of the data model, it’s also a Data Architecture. He emphasizes that it’s not a set of technologies or a specific model of how services interact.
A simple and well-established model is the Staged Event-Driven Architecture (SEDA), essentially a workflow process in which components emit events as a result of their processing, thereby driving the overall process. Events are commonly transported over a message bus of some form. Dawson notes one important issue with this model: events are short-lived and can therefore go missing during transport or while a component is offline. Instead of a system embracing eventual consistency, you get what he calls hopeful consistency; as long as everything works the system will be consistent, but when something breaks you end up with an inconsistent system and must perform a manual recovery to restore a consistent state. He calls this style entity-oriented microservices and strongly advises against it.
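The fragility Dawson describes can be seen in a toy model of a fire-and-forget bus (a hypothetical in-memory sketch, not any particular broker): an event published while a component is offline is simply gone, and the system's state quietly diverges.

```python
from collections import defaultdict

class InMemoryBus:
    """Hypothetical fire-and-forget bus: events are delivered once,
    never retained, and never redelivered."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Delivered only to handlers subscribed *right now*;
        # an offline component simply never sees the event.
        for handler in self._subscribers[topic]:
            handler(event)

bus = InMemoryBus()
received = []
bus.publish("orders", {"id": 1})          # no subscriber yet: event is lost
bus.subscribe("orders", received.append)  # component comes online
bus.publish("orders", {"id": 2})
# received now holds only the second event; the first must be
# recovered manually -- "hopeful consistency" in Dawson's terms
```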
For Dawson, the best solution for being able to rebuild consistent state is to accept that events are data and persist the stream of events. You can then replay the stream at any time to restore state, giving you a genuinely eventually consistent system. You also gain other benefits, including the possibility of maintaining multiple views of the same stream of events.
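A minimal sketch of the idea, assuming a hypothetical in-memory `EventLog`: because events are persisted, any consumer can rebuild state, or build an entirely different view, by replaying the log from any point.

```python
class EventLog:
    """Hypothetical append-only event log; the log is the source of truth."""
    def __init__(self):
        self._events = []

    def append(self, event):
        self._events.append(event)

    def replay(self, from_index=0):
        # Any consumer can rebuild its state by replaying from any position.
        return iter(self._events[from_index:])

log = EventLog()
log.append({"type": "deposited", "amount": 100})
log.append({"type": "withdrawn", "amount": 30})

# Two independent views derived from the same persisted stream:
balance = sum(e["amount"] if e["type"] == "deposited" else -e["amount"]
              for e in log.replay())          # a balance view
transactions = sum(1 for _ in log.replay())   # a transaction-count view
# balance == 70, transactions == 2
```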
In Dawson's experience, persisting streams of events is often called event sourcing, but he believes this to be incorrect because it is not about recreating the state of a single entity; it is about creating views over an unbounded set of entities. He therefore prefers to call this style stream processing, an architecture he thinks Kafka is well suited for. A Kafka client reads events from the stream in order, but can also replay them from the beginning, or from a specific event, when needed.
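The consumption model can be illustrated with a hypothetical in-memory `Partition` class; this mimics Kafka's ordered, offset-addressed reads but is not the Kafka client API.

```python
class Partition:
    """Hypothetical model of an ordered, offset-addressed stream,
    illustrating the Kafka consumption model (not the Kafka API)."""
    def __init__(self):
        self._records = []

    def append(self, record):
        self._records.append(record)
        return len(self._records) - 1   # the record's offset

    def read_from(self, offset=0):
        # Consumers read in order; "replaying" is just choosing an
        # earlier offset to start reading from.
        return self._records[offset:]

p = Partition()
for event in ["created", "renamed", "archived"]:
    p.append(event)

full_replay = p.read_from(0)     # ["created", "renamed", "archived"]
partial_replay = p.read_from(1)  # ["renamed", "archived"]
```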
In DDD terms, an aggregate is a set of entities inside a consistency boundary. By emitting and persisting events for every change to an aggregate, and rebuilding the state of that aggregate by replaying those same events, you get event-sourced aggregate roots; this is what Dawson defines as true event sourcing. He notes the importance of having a separate and unique stream of events for rebuilding each aggregate. For other needs, such as creating views, separate streams must be created.
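A sketch of an event-sourced aggregate root, using a hypothetical `Account` aggregate: commands validate and emit events, state is only ever derived by applying those events, and the aggregate can be rebuilt from its own stream.

```python
class Account:
    """Hypothetical event-sourced aggregate root: state is derived
    solely by applying events, never mutated directly by commands."""
    def __init__(self):
        self.balance = 0
        self.pending_events = []

    # Command: validate the request, then emit an event.
    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self._emit({"type": "deposited", "amount": amount})

    def _emit(self, event):
        self._apply(event)
        self.pending_events.append(event)  # to be persisted to this
                                           # aggregate's own stream

    def _apply(self, event):
        if event["type"] == "deposited":
            self.balance += event["amount"]

    @classmethod
    def from_events(cls, events):
        # Rebuild the aggregate by replaying its unique event stream.
        account = cls()
        for event in events:
            account._apply(event)
        return account

account = Account()
account.deposit(50)
account.deposit(25)
rebuilt = Account.from_events(account.pending_events)
# rebuilt.balance == 75 -- the same state, derived purely from events
```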
To help him build systems based on an event architecture model, Dawson has created Muon Stack, a set of libraries and services for building message- and event-oriented distributed systems. Among other things, it includes an event streaming API client and Photon, an event store, and he is currently working on a port to Kafka.