I'm creating a database to store events. There will be a lot of them, and each will have an associated time that is precise to the second. As an example, something like this:
Event
-----
Timestamp
ActionType (FK)
Source (FK)
Target (FK)
Actions, Sources, and Targets are all in 6NF. I'd like to keep the Event table normalized, but all of the approaches I could think of have problems. To be clear about my expectations for the data: the vast majority (99.9%) of events will be unique across just the above four fields (so I could use the whole row as a PK), but the few exceptions can't be ignored.
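For concreteness, the natural-key version would be something like this (the types and integer FKs are illustrative, not settled):

    -- Whole row as the primary key; this is exactly what breaks
    -- for the 0.1% of exact duplicates.
    CREATE TABLE Event (
        Timestamp  timestamp NOT NULL,
        ActionType integer   NOT NULL REFERENCES ActionType,
        Source     integer   NOT NULL REFERENCES Source,
        Target     integer   NOT NULL REFERENCES Target,
        PRIMARY KEY (Timestamp, ActionType, Source, Target)
    );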
Use a Surrogate Key: With a four-byte integer this is possible, but it seems like inflating the table for no reason. Additionally, I'm concerned about exhausting the key space if the database is used over a long period of time.
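In Postgres terms, something like the following; a bigserial (eight bytes) would postpone the exhaustion problem, but widens every row even further:

    -- Surrogate key; no uniqueness enforced on the four real columns.
    CREATE TABLE Event (
        EventId    bigserial PRIMARY KEY,
        Timestamp  timestamp NOT NULL,
        ActionType integer   NOT NULL REFERENCES ActionType,
        Source     integer   NOT NULL REFERENCES Source,
        Target     integer   NOT NULL REFERENCES Target
    );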
Add a Count Column to Event: Since I expect small counts I could use a smaller datatype, so this would have a smaller effect on database size, but it would require upserts or pooling the data outside the database before insertion. Either of those would add complexity and influence my choice of database software (I was thinking of going with Postgres, which does upserts, but not gladly).
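Roughly what I mean, assuming a Postgres version with INSERT ... ON CONFLICT (9.5+; older versions would need a writable CTE or a retry loop), with placeholder values:

    -- Natural key plus a small count column for the rare duplicates.
    CREATE TABLE Event (
        Timestamp  timestamp NOT NULL,
        ActionType integer   NOT NULL,
        Source     integer   NOT NULL,
        Target     integer   NOT NULL,
        Count      smallint  NOT NULL DEFAULT 1,
        PRIMARY KEY (Timestamp, ActionType, Source, Target)
    );

    -- Every insert becomes an upsert.
    INSERT INTO Event (Timestamp, ActionType, Source, Target)
    VALUES ('2015-06-01 12:00:00', 1, 2, 3)
    ON CONFLICT (Timestamp, ActionType, Source, Target)
    DO UPDATE SET Count = Event.Count + 1;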
Break Events into small groups: For example, all events in the same second could be part of a Bundle, which could have a surrogate key for the group and another for each event inside it. This adds another layer of abstraction and size to the database. It would be a good idea if otherwise-duplicate events become common, but otherwise seems like overkill.
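Something like this, with the key types only illustrative:

    -- One Bundle row per second; events get a small sequence number
    -- within their bundle, so exact duplicates can coexist.
    CREATE TABLE Bundle (
        BundleId  bigserial PRIMARY KEY,
        Timestamp timestamp NOT NULL UNIQUE
    );

    CREATE TABLE Event (
        BundleId   bigint   NOT NULL REFERENCES Bundle,
        EventNo    smallint NOT NULL,
        ActionType integer  NOT NULL,
        Source     integer  NOT NULL,
        Target     integer  NOT NULL,
        PRIMARY KEY (BundleId, EventNo)
    );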
While all of these are doable, they feel like a poor fit for my data. I was thinking of just doing a typical Snowflake and not enforcing a uniqueness constraint on the main Event table, but after reading PerformanceDBA answers like this one I thought maybe there was a better way.
So, what is the right way to keep time-series data with a small number of repeated events normalized?
Edit: Clarification - the sources for the data are logs, mostly flat files but some in various databases. One goal of this database is to unify them. None of the sources have time resolution finer than one second. The data will be used for questions like "How many different Sources executed Action on Target over Interval?", where Interval will not be less than an hour.
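For example (placeholder literals):

    -- "How many different Sources executed Action on Target over Interval?"
    SELECT COUNT(DISTINCT Source)
    FROM   Event
    WHERE  ActionType = 1                       -- the Action
    AND    Target     = 2                       -- the Target
    AND    Timestamp >= '2015-06-01 00:00:00'   -- the Interval
    AND    Timestamp <  '2015-06-02 00:00:00';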