As far as event sourcing goes, I wanted to pause and collect where my head is on the events that will be emitted from the instantiated models (or aggregates and entities, as they are sometimes called). I will not attempt to model a pure domain-driven design, beginning with tests and code first, as I create this. I have been a database guy for nearly 20 years at this point, so I tend to take a more data-first approach.
I plan on keeping the materialized view in mind to ensure I can reconstruct the schema and data from the events that are received, then capture the events that make up those changes to schema and data. We will track domain-level events in the same structure over time, but in my mind they are more for the business-level code that will be written around the events. I will try to maintain focus on the elements that answer the questions for the materialized SQL tables.
First we need structure
In the SQL world, the things we use to define a structure are the Data Definition Language (DDL) events: the table name, the columns, the data types, default values, and possibly some of the relationships between the data. These structure events will not say anything about indexes. Indexes are a detail reserved for how fast a materialization can answer a question. We will also not store the data types or length limits in MS SQL terms, but define them a bit more universally. There is no need for an event to store the [Floor] name field's data type and length as NVARCHAR(50); we can just note in the DDL event entry that it is a String with a length of 50. That way we can materialize the view in an MS SQL database, a JSON object in an object database, a CSV file, whatever we want in the future.
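As a rough sketch of that idea, here is what a storage-agnostic structure event might look like, along with one consumer mapping the universal type back to an MS SQL type. The event name "ColumnAdded" and the field layout are my illustrative assumptions, not a settled schema.

```python
# A hypothetical, storage-agnostic "structure" (DDL-style) event.
# The universal type ("String", length 50) replaces the vendor-specific
# NVARCHAR(50) so any downstream consumer can materialize it.
column_added = {
    "event_type": "ColumnAdded",
    "table": "Floor",
    "column": "Name",
    "data_type": "String",
    "max_length": 50,
    "nullable": False,
}

# One possible consumer: translate the universal type into an
# MS SQL column type at materialization time.
def to_mssql_type(event):
    if event["data_type"] == "String":
        return f"NVARCHAR({event['max_length']})"
    raise ValueError(f"unmapped type: {event['data_type']}")

print(to_mssql_type(column_added))  # NVARCHAR(50)
```

A JSON or CSV consumer would simply ignore the mapping step and use the universal type directly.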
Then we need data
Now that we know the structure the data will be stored in, we need to track the data being initialized and its changes over time. In the SQL world, the part we use to define data changes is the Data Manipulation Language (DML). The DML is what I will be keeping in mind as I design the events, to ensure that they fit with answering the questions needed when changing values in SQL Server. I will not be building a solution that only works with SQL, but my approach will be SQL first. Even so, any other downstream consumers of the events should be able to materialize the data however they need to.
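To make that concrete, a data-change event might carry the key and the changed values, and a SQL-first consumer could project it into an UPDATE statement. Again, the event name "RowUpdated" and its layout are assumptions for illustration only (and a real projector would use parameterized queries, not string formatting).

```python
# A hypothetical "data" (DML-style) event capturing a value change.
row_updated = {
    "event_type": "RowUpdated",
    "table": "Floor",
    "key": {"FloorId": 1},
    "changes": {"Name": {"old": "Ground", "new": "Lobby"}},
}

# One possible SQL-first consumer: turn the event into an UPDATE.
# (Illustration only; real code would use parameterized queries.)
def to_update_statement(event):
    sets = ", ".join(f"[{c}] = '{v['new']}'" for c, v in event["changes"].items())
    where = " AND ".join(f"[{k}] = {v}" for k, v in event["key"].items())
    return f"UPDATE [{event['table']}] SET {sets} WHERE {where};"

print(to_update_statement(row_updated))
# UPDATE [Floor] SET [Name] = 'Lobby' WHERE [FloorId] = 1;
```

A non-SQL consumer would apply the same `changes` dictionary to a JSON document or a row in a CSV file instead.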
Storing the changes
The data changes over time will be stored in the Event Source table along with the data structure changes over time. That way we can go back in time, see things as they were, and track the necessary changes to structure and data. From a pure Event Sourcing perspective this may be considered a bad approach, but I am a data guy and am biased toward the database questions that need to be answered. The events for structure and data changes will match the structure of the other domain-level events that we will store and emit from our future API layer, with no distinction.
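A minimal sketch of that single Event Source table, using an in-memory SQLite database for brevity; the column names (sequence, stream id, event type, payload) are my assumptions about what such a table would minimally need. The point is that structure, data, and domain events all land in the same table with the same shape.

```python
import json
import sqlite3

# One Event Source table holding structure, data, and domain events
# with no distinction between them (assumed column layout).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE event_source (
        sequence   INTEGER PRIMARY KEY AUTOINCREMENT,
        stream_id  TEXT NOT NULL,
        event_type TEXT NOT NULL,
        payload    TEXT NOT NULL,
        occurred   TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
    )
""")

def append(stream_id, event_type, payload):
    conn.execute(
        "INSERT INTO event_source (stream_id, event_type, payload) VALUES (?, ?, ?)",
        (stream_id, event_type, json.dumps(payload)),
    )

# Structure, data, and domain events appended to the same stream.
append("Floor-1", "ColumnAdded", {"table": "Floor", "column": "Name"})
append("Floor-1", "RowUpdated", {"key": {"FloorId": 1}, "changes": {"Name": "Lobby"}})
append("Floor-1", "FloorRenamed", {"new_name": "Lobby"})

# Replaying in sequence order shows the history as it happened.
rows = conn.execute(
    "SELECT sequence, event_type FROM event_source ORDER BY sequence"
).fetchall()
print(rows)  # [(1, 'ColumnAdded'), (2, 'RowUpdated'), (3, 'FloorRenamed')]
```

Going "back in time" then becomes a matter of replaying events up to a chosen sequence number.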
I wanted to capture where my thoughts were at this point before moving on to the next step. I am certain that my opinions will change over time and new problems will be presented in the future, or caused by the decisions I have made already. Let's track the changes in thought over time too; hopefully it will help someone along the way. We will keep all these things in mind as we move forward to declare the model and events and get a little more structure around these thoughts.