
I'm creating a database to store a large number of events, each with an associated time that is precise to the second. As an example, something like this:

Event
-----
Timestamp
ActionType (FK)
Source (FK)
Target (FK)

Actions, Sources, and Targets are all in 6NF. I'd like to keep the Event table normalized, but all of the approaches I could think of have problems. To be clear about my expectations for the data, the vast majority (99.9%) of events will be unique with just the above four fields (so I can use the whole row as a PK), but the few exceptions can't be ignored.

  1. Use a Surrogate Key: If I use a four-byte integer this is possible, but it seems like just inflating the table for no reason. Additionally I'm concerned about using the database over a long period of time and exhausting the key space.

  2. Add a Count Column to Event: Since I expect small counts I could use a smaller datatype and this would have a smaller effect on database size, but it would require upserts or pooling the data outside the database before insertion (see the sketch after this list). Either of those would add complexity and influence my choice of database software (I was thinking of going with Postgres, which does upserts, but not gladly.)

  3. Break Events into small groups: For example, all events in the same second could be part of a Bundle which could have a surrogate key for the group and another for each event inside it. This adds another layer of abstraction and size to the database. It would be a good idea if otherwise-duplicate events become common, but otherwise seems like overkill.
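
For illustration, a minimal sketch of option 2 on a recent Postgres (9.5 or later, where INSERT ... ON CONFLICT is available); the table and column names here are hypothetical, not my actual schema:

CREATE TABLE event (
    ts              timestamp NOT NULL,
    action_type_id  integer   NOT NULL,   -- FK to ActionType
    source_id       integer   NOT NULL,   -- FK to Source
    target_id       integer   NOT NULL,   -- FK to Target
    event_count     smallint  NOT NULL DEFAULT 1,
    PRIMARY KEY (ts, action_type_id, source_id, target_id)
);

-- Upsert: insert a new event, or bump the count if the natural key already exists.
INSERT INTO event (ts, action_type_id, source_id, target_id)
VALUES ('2013-02-02 08:00:01', 1, 1, 2)
ON CONFLICT (ts, action_type_id, source_id, target_id)
DO UPDATE SET event_count = event.event_count + 1;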

While all of these are doable, they feel like a poor fit for my data. I was thinking of just using a typical snowflake schema and not enforcing a uniqueness constraint on the main Event table, but after reading PerformanceDBA answers like this one I thought maybe there was a better way.

So, what is the right way to keep time-series data with a small number of repeated events normalized?

Edit: Clarification - the sources for the data are logs, mostly flat files but some in various databases. One goal of this database is to unify them. None of the sources have time resolution more precise than one second. The data will be used for questions like "How many different Sources executed Action on Target over Interval?" where Interval will not be less than an hour.

polm23
  • I got fed up with the politics at SO and left. After 4 years, I have returned. The answer is incorrect. If you would like a better answer on this question, please comment to that effect, with my handle. – PerformanceDBA May 22 '15 at 14:30
  • @PerformanceDBA Unfortunately I missed your comment somehow. I don't work on the system described in this question anymore, and it looks like you've been gone a few years again, but if you're ever back and would like to explain the right way to do it I'd be glad to read it. – polm23 Oct 30 '17 at 06:12
  • @PerformanceDBA are you still available to give us a better answer to this question? – hiradyazdan Jul 21 '19 at 21:36

1 Answer


The simplest answers seem to be

  • store the timestamp with greater precision, or
  • store the timestamp to the second and retry (with a slightly later timestamp) if INSERT fails because of a duplicate key (sketched below).
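
A rough sketch of the second idea in PL/pgSQL, assuming an event table whose natural key (ts, action_type_id, source_id, target_id) is declared as the primary key; the function and column names are placeholders, not part of your schema:

CREATE FUNCTION insert_event(p_ts timestamp, p_action integer, p_source integer, p_target integer)
RETURNS void AS $$
BEGIN
    LOOP
        BEGIN
            INSERT INTO event (ts, action_type_id, source_id, target_id)
            VALUES (p_ts, p_action, p_source, p_target);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- Duplicate natural key: nudge the timestamp forward and retry.
            p_ts := p_ts + interval '1 second';
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

On a duplicate-key failure this simply slides the timestamp forward one second and tries again; whether that small falsification of the timestamp is acceptable depends on what the timestamp means (see below).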

None of the three ideas you mention have anything to do with normalization. These are decisions about what to store; at the conceptual level, you normalize after you decide what to store. What the row means (so, what each column means) is significant; these meanings make up the table's predicate. The predicate lets you derive new true facts from older true facts.

Using an integer as a surrogate key, you're unlikely to exhaust the key space. But you still have to declare the natural key, so a surrogate in this case doesn't do anything useful for you.
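
To make that concrete (again with hypothetical names): even with a surrogate, the natural key still has to be declared, so the surrogate only adds a column without removing any constraint.

CREATE TABLE event (
    event_id        bigserial PRIMARY KEY,   -- surrogate key
    ts              timestamp NOT NULL,
    action_type_id  integer   NOT NULL,
    source_id       integer   NOT NULL,
    target_id       integer   NOT NULL,
    UNIQUE (ts, action_type_id, source_id, target_id)   -- the natural key, still declared
);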

Adding a "count" colummn makes sense if it makes sense to count things; otherwise it doesn't. Look at these two examples.

Timestamp            ActionType  Source  Target
--
2013-02-02 08:00:01  Wibble      SysA    SysB
2013-02-02 08:00:02  Wibble      SysA    SysB

Timestamp            ActionType  Source  Target  Count
--
2013-02-02 08:00:01  Wibble      SysA    SysB    2

What's the difference in meaning here? The meaning of "Timestamp" is particularly important. Normalization is based on semantics; what you need to do depends on what the data means, not on what the columns are named.

Breaking events into small groups might make sense (like adding a "count" column might make sense) if groups of events have meaning in your system.
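
If groups do carry meaning, one way the "bundle" idea might be sketched (hypothetical names again):

CREATE TABLE bundle (
    bundle_id  bigserial PRIMARY KEY,
    ts         timestamp NOT NULL UNIQUE   -- e.g. one bundle per second
);

CREATE TABLE event (
    bundle_id       bigint   NOT NULL REFERENCES bundle,
    seq             smallint NOT NULL,      -- position within the bundle
    action_type_id  integer  NOT NULL,
    source_id       integer  NOT NULL,
    target_id       integer  NOT NULL,
    PRIMARY KEY (bundle_id, seq)
);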

Mike Sherrill 'Cat Recall'
  • You're right, normalization isn't the best word for this. I edited the question to provide details about my sources and expected use - the most important thing is the count. While groups of events do have meaning, I can't determine the groupings at insert time. – polm23 Feb 20 '13 at 01:33
  • Store the count. Make the primary key {Timestamp, ActionType, Source, Target}. – Mike Sherrill 'Cat Recall' Feb 20 '13 at 12:23