I have an Entity Framework model with a table that is expected to hold a lot of data. I am concerned about using an int primary key because I expect the table to grow beyond the range of an int. I'm considering using Int64 to get around the issue.
Here is the real kicker: I am using table-per-type inheritance on the table in question. So if I go with Int64, several other tables (actually an arbitrary number of tables, since I will be adding more subtypes) will have to use an Int64 primary key as well, even though the likelihood of any of them growing beyond the bounds of an int is pretty slim. That seems like an inefficient solution. Thoughts?
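To make the setup concrete, here is a simplified sketch of what I mean (code-first for illustration; the class names are made up and my real model has more subtypes):

```csharp
using System;
using System.ComponentModel.DataAnnotations.Schema;

// Table-per-type: the base class and each subtype map to their own tables.
[Table("Transactions")]
public class Transaction
{
    // Int64 (long) only because the base table is expected to outgrow int.
    public long Id { get; set; }
    public DateTime CreatedOn { get; set; }
}

[Table("CardTransactions")]
public class CardTransaction : Transaction
{
    // This table shares the base table's PK, so it inherits the long key
    // even though it alone would probably never outgrow an int.
    public string CardNumber { get; set; }
}

[Table("WireTransactions")]
public class WireTransaction : Transaction
{
    public string RoutingNumber { get; set; }
}
```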
I was thinking of going with a composite key consisting of an int ID and a subtype discriminator, probably a char. I am wondering about the performance implications of this approach. I've always preferred surrogate keys unless it's a straightforward associative entity. Is there a better approach than either of these, and if not, which is the preferable route?
UPDATE:
I want to clarify that the composite key I'm considering isn't a natural composite key. It only contains the ID and a subtype discriminator. I am interested in using it as a way to avoid the size limitations of an int primary key. My main concerns are:
1) Is there a performance concern with using Int64 primary keys, especially with table-per-type inheritance, where the larger key is only necessary for the parent table but must also be used in all of the child tables?
2) Does using a (non-natural) composite key to achieve a larger range than an int primary key offer any performance advantage over using an Int64 key to do the same thing?
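For question 2, the composite-key alternative I have in mind would look roughly like this (a sketch, assuming code-first and SQL Server; EF doesn't map the C# char type, so the discriminator is a fixed-length one-character string):

```csharp
using System;
using System.Data.Entity;

public class Transaction
{
    // The int ID is only unique per subtype; the combined range becomes
    // (number of subtypes) x int, which is the whole point of the scheme.
    public int Id { get; set; }
    public string SubType { get; set; } // one-char discriminator, e.g. "C" or "W"
    public DateTime CreatedOn { get; set; }
}

public class MyContext : DbContext
{
    public DbSet<Transaction> Transactions { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Composite PK: (Id, SubType). Id alone is no longer unique, so it
        // can't be a plain IDENTITY column; key generation would have to be
        // handled per subtype.
        modelBuilder.Entity<Transaction>()
                    .HasKey(t => new { t.Id, t.SubType });

        modelBuilder.Entity<Transaction>()
                    .Property(t => t.SubType)
                    .HasMaxLength(1)
                    .IsFixedLength()
                    .IsUnicode(false); // maps to char(1) on SQL Server
    }
}
```

As I understand it, this trades a single 8-byte key for a 5-byte (4 + 1) one, at the cost of carrying a two-column key into every index and foreign key that references the table.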