
Lately I find myself writing data access layer select methods where the code all takes this general form:

public static DataTable GetSomeData( ... arguments)
{
    string sql = " ... sql string here:  often it's just a stored procedure name ... ";

    DataTable result = new DataTable();

    // GetOpenConnection() is a private method in the class: 
    // it manages the connection string and returns an open and ready connection
    using (SqlConnection cn = GetOpenConnection())
    using (SqlCommand cmd = new SqlCommand(sql, cn))
    {
        // could be any number of parameters, each with a different type
        cmd.Parameters.Add("@Param1", SqlDbType.VarChar, 50).Value = param1; //argument passed to function

        using (SqlDataReader rdr = cmd.ExecuteReader())
        {
            result.Load(rdr);
        }
    }

    return result;
}

Or like this:

public static DataRow GetSomeSingleRecord( ... arguments)
{
    string sql = " ... sql string here:  often it's just a stored procedure name ... ";

    DataTable dt = new DataTable();

    // GetOpenConnection() is a private method in the class: 
    // it manages the connection string and returns an open and ready connection
    using (SqlConnection cn = GetOpenConnection())
    using (SqlCommand cmd = new SqlCommand(sql, cn))
    {
        // could be any number of parameters, each with a different type
        cmd.Parameters.Add("@Param1", SqlDbType.VarChar, 50).Value = param1; //argument passed to function

        using (SqlDataReader rdr = cmd.ExecuteReader(CommandBehavior.SingleRow))
        {
            dt.Load(rdr);
        }
    }

    if (dt.Rows.Count > 0)
         return dt.Rows[0];
    return null;
}

These methods would be called by business-layer code that then converts the base DataTable or DataRow into strongly typed business objects that the presentation tier can use.
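For illustration only, a business-layer caller might look roughly like this (the Widget type, its constructor, and the column names are hypothetical, not part of my actual code):

// Hypothetical business-layer wrapper; Widget and the column names are made up for illustration.
public static List<Widget> GetWidgets(string param1)
{
    List<Widget> widgets = new List<Widget>();
    DataTable table = GetSomeData(param1);
    foreach (DataRow row in table.Rows)
    {
        widgets.Add(new Widget((int)row["Id"], (string)row["Name"]));
    }
    return widgets;
}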

Since I'm using similar code repeatedly, I want to make sure this code is the best it can be. So how can it be improved? And is it worth moving the common code out to its own method? If so, what would that method look like (specifically with regard to passing an SqlParameter collection in)?

Joel Coehoorn
  • you do this just as I do - although I really like how you stack your dual using statements to avoid the deep nesting. well done! one thing I am interested in is how you logically/physically set up your DAL vs. your BIZ - do you put this in its own project? or as a namespace? or what? – dfasdljkhfaskldjhfasklhf Jan 12 '09 at 16:50
  • If there are no parameters you can stack the datareader as well. DAL is in its own assembly; DAL + BL share a common parent namespace. – Joel Coehoorn Jan 12 '09 at 16:53
  • Yes, that's how I do it now. What is your mapping between business objects and DAL objects? i.e. do you go 1:1? One interesting thing I saw was someone who pulled the DAL into the business object and generated it all out with CodeSmith. Very interesting – dfasdljkhfaskldjhfasklhf Jan 12 '09 at 16:59
  • It's not 1:1, but that's a peculiarity of the system here. The database I use is a snapshot dump from a mainframe, and the tables we see aren't what you'd call "usual". – Joel Coehoorn Jan 12 '09 at 17:12
  • ok but ideally in your own normalized system would you go 1:1, would you consolidate the dal objects the same way you'd do to a business object? i.e. lookup tables specific to one business object - great topic by the way, I want to improve this in my own work too – dfasdljkhfaskldjhfasklhf Jan 12 '09 at 17:28
  • Posting a sample of how your business object uses these methods would also help. How do you extract data from those DataTable objects? – Dan C. Jan 12 '09 at 17:40
  • Rather than creating the SqlConnection in the method, might try creating it outside the methods then passing it into the method. That way you can use the same SqlConnection in multiple calls, very useful when it is a transaction. – DavGarcia Jan 12 '09 at 17:49
  • @BPAndrew: it would probably be _mostly_ 1:1, but one of the things about software is that there are always exceptions. ;) I think if it were _always_ 1:1 there wouldn't be much point in having separate tiers; the benefit from tiering would be outweighed by the penalty of separating related code. – Joel Coehoorn Jan 12 '09 at 19:23
  • "one of the things about software is that there are always exceptions." ha! well done – dfasdljkhfaskldjhfasklhf Jan 12 '09 at 19:43
  • @David: I don't think I like that as much. It means I can't rely on the "using" construct to close the connection, but must instead put that code elsewhere. Instead, I think I'd want an entire transaction all in one SP if possible, or one DAL method that called several SPs sequentially if not. – Joel Coehoorn Jan 12 '09 at 20:22

7 Answers


Had to add my own:
Return DataReader from DataLayer in Using statement

The new pattern lets me keep only one record in memory at a time, while still wrapping the connection in a tidy 'using' statement:

public IEnumerable<T> GetSomeData<T>(string filter, Func<IDataRecord, T> factory)
{
    string sql = "SELECT * FROM [SomeTable] WHERE SomeColumn= @Filter";

    using (SqlConnection cn = new SqlConnection(GetConnectionString()))
    using (SqlCommand cmd = new SqlCommand(sql, cn))
    {
        cmd.Parameters.Add("@Filter", SqlDbType.NVarChar, 255).Value = filter;
        cn.Open();

        using (IDataReader rdr = cmd.ExecuteReader())
        {
            while (rdr.Read())
            {
                yield return factory(rdr);
            }
            rdr.Close();
        }
    }
}
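A hypothetical call site, just to show how the factory delegate is used (the Widget type and the column positions are made up):

// Rows are materialized one at a time as the caller enumerates;
// the connection is closed when the foreach (and hence the iterator) completes.
foreach (Widget w in GetSomeData("some filter value",
    r => new Widget(r.GetInt32(0), r.GetString(1))))
{
    // work with each Widget here
}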
Joel Coehoorn

Similar to what I posted here

public IEnumerable<S> Get<S>(string query, Action<IDbCommand> parameterizer, 
                             Func<IDataRecord, S> selector)
{
    using (var conn = new T()) // T is the class's connection type parameter: where T : IDbConnection, new()
    {
        using (var cmd = conn.CreateCommand())
        {
            if (parameterizer != null)
                parameterizer(cmd);
            cmd.CommandText = query;
            cmd.Connection.ConnectionString = _connectionString;
            cmd.Connection.Open();
            using (var r = cmd.ExecuteReader())
                while (r.Read())
                    yield return selector(r);
        }
    }
}
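For new T() to compile, the enclosing class presumably declares a constrained connection type parameter, something along these lines (an assumption on my part; the linked post has the full version):

// Assumed shape of the enclosing class; only the constraint matters here.
public class Db<T> where T : IDbConnection, new()
{
    private readonly string _connectionString;

    public Db(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Get<S>(...) shown above is a member of this class.
}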

I have these simple extension methods to make calling easier:

public static void Parameterize(this IDbCommand command, string name, object value)
{
    var parameter = command.CreateParameter();
    parameter.ParameterName = name;
    parameter.Value = value;
    command.Parameters.Add(parameter);
}

public static T To<T>(this IDataRecord dr, int index, T defaultValue = default(T),
                      Func<object, T> converter = null)
{
    return dr[index].To<T>(defaultValue, converter);
}

static T To<T>(this object obj, T defaultValue, Func<object, T> converter)
{
    if (obj.IsNull())
        return defaultValue;

    return converter == null ? (T)obj : converter(obj);
}

public static bool IsNull<T>(this T obj) where T : class
{
    // Cast to object so the reference comparison against DBNull.Value compiles for any T.
    return (object)obj == null || (object)obj == DBNull.Value;
}

So now I can call:

var query = Get(sql, cmd =>
{
    cmd.Parameterize("saved", 1);
    cmd.Parameterize("name", "abel");
}, r => new User(r.To<int>(0), r.To<string>(1), r.To<DateTime?>(2), r.To<bool>(3)));
foreach (var user in query)
{

}

This is fully generic and fits any provider that implements the ADO.NET interfaces. The connection object and reader are disposed only once the collection has been enumerated.

nawfal

One pattern I've enjoyed looks like this as far as client code goes:

DataTable data = null;
using (StoredProcedure proc = new StoredProcedure("MyProcName", "[Connection]"))
{
    proc.AddParameter("@LoginName", loginName);
    data = proc.ExecuteDataTable();
}

I usually make the connection optional, and I code it so the value can be pulled from the ConnectionStrings config section or treated as the actual connection string. This lets me reuse the DAL in one-off scenarios, and it's partly a habit from the COM+ days, when I stored the connection string using the object construction property.

I like this because it's easy to read and hides all the ADO code from me.
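The wrapper class itself isn't shown; here is a minimal sketch of what it might look like, inferred purely from the client code above (all member names and behavior are assumptions):

using System;
using System.Configuration;
using System.Data;
using System.Data.SqlClient;

// Illustrative only: inferred from the usage above, not the author's actual class.
public sealed class StoredProcedure : IDisposable
{
    private readonly SqlConnection _cn;
    private readonly SqlCommand _cmd;

    public StoredProcedure(string procName, string connectionNameOrString)
    {
        // Treat the second argument as a <connectionStrings> key first, else as a raw connection string.
        ConnectionStringSettings cs = ConfigurationManager.ConnectionStrings[connectionNameOrString];
        _cn = new SqlConnection(cs != null ? cs.ConnectionString : connectionNameOrString);
        _cmd = new SqlCommand(procName, _cn) { CommandType = CommandType.StoredProcedure };
    }

    public void AddParameter(string name, object value)
    {
        _cmd.Parameters.AddWithValue(name, value ?? DBNull.Value);
    }

    public DataTable ExecuteDataTable()
    {
        DataTable table = new DataTable();
        using (SqlDataAdapter da = new SqlDataAdapter(_cmd))
        {
            da.Fill(table); // Fill opens and closes the connection itself
        }
        return table;
    }

    public void Dispose()
    {
        _cmd.Dispose();
        _cn.Dispose();
    }
}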

JoshBerke
  • I do have some issues with this, but upvote because it gave me an idea: I'm gonna play with hiding the common code away as a Functor rather than in a method. – Joel Coehoorn Jan 12 '09 at 16:58
  • Interesting please share what you come up with. I'm not sold on this pattern, it's just as far as I got on 2.0. And it reduces the clutter – JoshBerke Jan 12 '09 at 17:06
  • Note that I said "play": I don't think I'll end up going that route. If I do come up with something interesting I'll post it back. – Joel Coehoorn Jan 12 '09 at 17:14

The only thing I do differently is that I switched from my own internal database helper methods to the actual Data Access Application Block: http://msdn.microsoft.com/en-us/library/cc309504.aspx

It makes the code a little more standardized/uniform, so other developers who already know the Enterprise Library can ramp up on it quickly.
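For comparison, a typical call through the Data Access Application Block looks roughly like this (sketched from memory of the Enterprise Library API; the database name and procedure name are placeholders, so verify against the version you use):

public static DataTable GetSomeData(string param1)
{
    // "MyDb" is a placeholder for a named connection string in configuration.
    Database db = DatabaseFactory.CreateDatabase("MyDb");
    using (DbCommand cmd = db.GetStoredProcCommand("usp_GetSomeData"))
    {
        db.AddInParameter(cmd, "@Param1", DbType.String, param1);
        // The block handles opening and closing the connection.
        return db.ExecuteDataSet(cmd).Tables[0];
    }
}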

dfasdljkhfaskldjhfasklhf

There are many ways to implement the DAL; in my opinion you are on the right path. Some things to consider in your implementation:

  • You are using a factory-like method to create your SqlConnection; it is a minor point, but you can do the same for your SqlCommand (a sketch follows the DbUtil class below).
  • The parameter length is optional, so you can actually leave it out of the Parameters.Add call.
  • Create helper methods for adding parameters too; code sample below.

Add parameters using DbUtil.AddParameter(cmd, "@Id", SqlDbType.UniqueIdentifier, Id);

internal class DbUtil
{
    internal static SqlParameter CreateSqlParameter(
        string parameterName,
        SqlDbType dbType,
        ParameterDirection direction,
        object value)
    {
        SqlParameter parameter = new SqlParameter(parameterName, dbType);

        if (value == null)
        {
            value = DBNull.Value;
        }

        parameter.Value = value;
        parameter.Direction = direction;
        return parameter;
    }

    internal static SqlParameter AddParameter(
        SqlCommand sqlCommand,
        string parameterName,
        SqlDbType dbType)
    {
        return AddParameter(sqlCommand, parameterName, dbType, null);
    }

    internal static SqlParameter AddParameter(
        SqlCommand sqlCommand,
        string parameterName,
        SqlDbType dbType,
        object value)
    {
        return AddParameter(sqlCommand, parameterName, dbType, ParameterDirection.Input, value);
    }

    internal static SqlParameter AddParameter(
        SqlCommand sqlCommand,
        string parameterName,
        SqlDbType dbType,
        ParameterDirection direction,
        object value)
    {
        SqlParameter parameter = CreateSqlParameter(parameterName, dbType, direction, value);
        sqlCommand.Parameters.Add(parameter);
        return parameter;
    }
}
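As a sketch of the first bullet above, a companion factory for the command could be as simple as this (illustrative only; GetOpenConnection is the helper from the question):

// Illustrative companion to GetOpenConnection(): centralizes SqlCommand creation.
internal static SqlCommand CreateCommand(SqlConnection connection, string storedProcName)
{
    return new SqlCommand(storedProcName, connection)
    {
        CommandType = CommandType.StoredProcedure
    };
}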
DavGarcia

First, I assume you have already considered using an ORM versus rolling your own; I won't go into that here.

My thoughts on rolling your own data access code:

  • Over time, I found it easier not to have separate DAL/BL objects, but rather to merge them into a single object (some time after reaching this conclusion I found out it's a pretty well-known pattern, namely ActiveRecord). It might look nice and decoupled to have separate DAL assemblies, but the maintenance overhead adds up. Every time you add a new feature, you'll have to create more code and modify more classes. In my experience, the team that maintains the application is often much smaller than the original team that built it, and they'll hate the extra work required.
  • For large teams, it might make sense to separate the DAL (and let one group work on it while the others work on the business layer). But this creates a good incentive for code bloat.
  • Coming down to your specific sample: how do you use the resulting DataTable? Iterate the rows, create typed objects and get the data from the row? If the answer is yes, think of the extra DataTable you created just for moving data between the DAL and the BL. Why not take it directly from the DataReader?
  • Also about the sample: if you return an untyped DataTable, then I guess you have to use the column names (of the result set the SP call returns) way up in the calling code. This means that if you have to change something in the database, it might affect both layers.

My suggestion (I tried both approaches; this is the latest working one I came up with, and it evolved over time):

  • Make a base class for your typed business objects.
  • Keep object state in the base class (new, modified etc.)
  • Put the main data access methods in this class, as static methods. With a little effort (hint: generic methods + Activator.CreateInstance) you can create one business object per row returned by the reader.
  • Make an abstract method in the business object for parsing the row data (directly from the DataReader!) and filling the object.
  • Make static methods in the derived business objects that prepare the stored proc parameters (depending on various filter criteria) and call the generic data access methods from the base class.

The aim is to end up with usage such as:

List<MyObject> objects = MyObject.FindMyObject(someParam);
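A minimal sketch of what that base class and a derived object might look like (all names and details here are illustrative, not taken from an actual implementation):

using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// Illustrative base class: one generic loader; each derived class parses its own rows.
public abstract class BusinessObjectBase
{
    // Each derived object knows how to fill itself from a data record.
    protected abstract void Parse(IDataRecord record);

    protected static List<T> Load<T>(string procName, Action<SqlCommand> addParameters)
        where T : BusinessObjectBase, new()
    {
        List<T> results = new List<T>();
        using (SqlConnection cn = new SqlConnection(GetConnectionString()))
        using (SqlCommand cmd = new SqlCommand(procName, cn) { CommandType = CommandType.StoredProcedure })
        {
            if (addParameters != null)
                addParameters(cmd);
            cn.Open();
            using (SqlDataReader rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    T item = new T(); // or Activator.CreateInstance<T>(), as suggested above
                    item.Parse(rdr);
                    results.Add(item);
                }
            }
        }
        return results;
    }

    private static string GetConnectionString()
    {
        // Placeholder: read from configuration in a real implementation.
        throw new NotImplementedException();
    }
}

// Derived object: prepares parameters and delegates to the generic loader.
public class MyObject : BusinessObjectBase
{
    public int Id { get; private set; }
    public string Name { get; private set; }

    protected override void Parse(IDataRecord record)
    {
        Id = record.GetInt32(record.GetOrdinal("Id"));
        Name = record.GetString(record.GetOrdinal("Name"));
    }

    public static List<MyObject> FindMyObject(string someParam)
    {
        return Load<MyObject>("usp_FindMyObject",
            cmd => cmd.Parameters.AddWithValue("@SomeParam", someParam));
    }
}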

The benefit for me was that I only have to change one file to cope with changes in database column names, types, etc. (small changes in general). With some well-thought-out regions, you can organize the code so that there are separate "layers" in the same object :). The other benefit is that the base class is easily reusable from one project to another, and the code bloat is minimal (well, compared with the benefits). You could also fill DataSets and bind them to UI controls :D

The limitations: you end up with one class per domain object (usually per main database table), and you can't load objects inside existing transactions (although you could consider passing the transaction in, if you have one).

Let me know if you're interested in more details - I could expand the answer a bit.

Dan C.

The simplest solution:

var dt = new DataTable();
dt.Load(myDataReader);
List<DataRow> rows = dt.AsEnumerable().ToList(); // AsEnumerable() requires a reference to System.Data.DataSetExtensions
Mohsen
  • This misses the point completely. It's also likely the slowest answer here, as it loads the entire result set in RAM not just once, but twice. – Joel Coehoorn Sep 10 '17 at 13:39