I'm trying to archive entities in a table. There are a couple of ways to do this. One of them is to add an IsArchived column and set it to true when an entity is deleted or moved into history. One disadvantage of this design is that the table gets very heavy.
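For example (a rough sketch - the entity and property names are just illustrative):

public class Order
{
    public int ID { get; set; }
    public bool IsArchived { get; set; }
}

// every "live" query then has to filter the flag out
var liveOrders = context.Orders.Where(o => !o.IsArchived).ToList();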

Another way is to create a duplicate of the class for each entity to be logged, create another table, and copy deleted entities into that log table with the help of AutoMapper. In this case I need lots of duplicate classes for the entities that need to be archived.
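Roughly (a sketch using the classic static AutoMapper API - the class names are illustrative):

Mapper.CreateMap<Order, ArchivedOrder>();

var archivedOrder = Mapper.Map<ArchivedOrder>(order);
archiveContext.ArchivedOrders.Add(archivedOrder);
archiveContext.SaveChanges();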

Are there any other solutions for archiving specific entities?

kkocabiyik

3 Answers

The best way would be to add a nullable ArchivedTimeStamp column to the table. That way it is possible to tell whether a row has been archived and, if so, when it was archived.
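In code-first terms that is just a nullable DateTime (the entity name is illustrative):

public class Order
{
    public int ID { get; set; }

    // null = live row; non-null = when the row was archived
    public DateTime? ArchivedTimeStamp { get; set; }
}

// live rows are simply the ones that were never stamped
var liveOrders = context.Orders.Where(o => o.ArchivedTimeStamp == null);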

If you are worried about the table size, you can partition the table and automatically move the archived rows onto a secondary / slower physical disk. You can even partition it in such a way that only rows that were archived, say, more than a year ago get moved to the secondary partition.
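A rough sketch of the DDL side, wrapped in an EF code-first migration (the function/scheme names and the SECONDARY filegroup are made up, and you would still need to rebuild the table's clustered index on the new scheme, which I've omitted):

using System.Data.Entity.Migrations;

public partial class AddArchivePartitioning : DbMigration
{
    public override void Up()
    {
        // with RANGE RIGHT, rows archived before the boundary date
        // land on the first (slower) filegroup
        Sql(@"CREATE PARTITION FUNCTION pfArchived (datetime)
              AS RANGE RIGHT FOR VALUES ('2013-01-01')");
        Sql(@"CREATE PARTITION SCHEME psArchived
              AS PARTITION pfArchived TO ([SECONDARY], [PRIMARY])");
    }
}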

More info on SQL archiving using partitioning can be found at http://www.mssqltips.com/sqlservertip/2780/archiving-sql-server-data-using-partitioning/

Jacques Snyman

You could have more than one database with the same schema. You could then open two contexts, one for each database, using different connection strings. Query one, attach the entities to the other, and save.

I've never done this, but it should work. You might run into trouble because the entities will still be attached to the source context and cannot be attached to the destination, but there are ways to detach and reattach entities.
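Something like this, perhaps (MyContext and the connection-string names are assumptions; AsNoTracking sidesteps the attach problem because the entity is never tracked by the source context):

// id = primary key of the row to archive
using (var live = new MyContext("LiveDb"))
using (var archive = new MyContext("ArchiveDb"))
{
    var order = live.Orders.AsNoTracking().Single(o => o.ID == id);

    archive.Orders.Add(order);   // insert into the archive database
    archive.SaveChanges();

    live.Orders.Attach(order);   // now delete it from the live database
    live.Orders.Remove(order);
    live.SaveChanges();
}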

Steve
  • I think there will be some problems on primary and foreign keys while attaching to another database with this design. – kkocabiyik Oct 15 '13 at 20:41
  • Didn't think of that. It wouldn't be difficult to recreate the relationships if necessary though. – Steve Oct 15 '13 at 21:06

I have implemented a soft delete for the purposes of undo. My answer shows how to overcome some of the problems normally associated with soft deletes - i.e. joins and indexes. It suits my purposes well. However, if it were used for archiving then the tables would grow forever.

Your other idea is to create duplicate classes and use AutoMapper. That sounds like a lot of extra coding.

I think you could create a database with the same schema - except that, perhaps, the primary keys would not be database generated, and the foreign keys would not be enforced. Then override the delete so that the data is copied over first.

Something like this:

public override int SaveChanges()
{
    // ModelBase is my base class for entities with a single
    // "ID" property - all my entities derive from it
    foreach (var entry in ChangeTracker.Entries()
                                       .Where(p => p.State == EntityState.Deleted
                                                && p.Entity is ModelBase)
                                       .ToList()) // ToList because the loop changes entry states
    {
        CustomDelete(entry);
    }

    return base.SaveChanges();
}

private void CustomDelete(DbEntityEntry entry)
{
    var e = (ModelBase)entry.Entity;
    string tableName = GetTableName(e.GetType()); // my helper that resolves the table name for a type
    string sql = String.Format(@"INSERT INTO archive.{0} SELECT * FROM {0} WHERE ID = @id;
                                 DELETE FROM {0} WHERE ID = @id", tableName);
    Database.ExecuteSqlCommand(sql, new SqlParameter("id", e.ID));

    // detach so EF doesn't try to issue its own DELETE as well
    entry.State = EntityState.Detached;
}

Note that in EF6 you could also override the delete by altering the SQL in the migration file when mapping to stored procedures is used.
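Roughly (a sketch assuming an entity mapped with MapToStoredProcedures() - the procedure, table, and parameter names are whatever EF scaffolded for you):

public partial class ArchiveOnDelete : DbMigration
{
    public override void Up()
    {
        // replace the scaffolded delete body so the row is archived first
        AlterStoredProcedure(
            "dbo.Order_Delete",
            p => new
            {
                ID = p.Int(),
            },
            body:
                @"INSERT INTO archive.Orders SELECT * FROM dbo.Orders WHERE ID = @ID;
                  DELETE FROM dbo.Orders WHERE ID = @ID");
    }
}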

Colin