I have a slightly different take on this. When profiling (with our shiny profiler) we noticed that LINQ (to SQL, in this case) was doing a reasonable job generating TSQL for basic queries, and the operation was running very fast at the DB server (0.5ms etc.) - however, the actual query operation was taking MUCH longer (20ms+ for the same 0.5ms query, in some cases). So where was the time going? You might think "query translation", but no; we also have a lot of ExecuteQuery<T>
code (i.e. where you write the TSQL by hand) and that was doing exactly the same thing. It turned out that vast amounts of time were being lost somewhere between the materializer and the identity map.
So we wrote our own materializer that was pretty much a drop-in replacement for ExecuteQuery - and thus was born dapper.
On more of the LINQ side, it generally does OK at generating TSQL for simple queries, but for anything complex I usually trust hand-coded TSQL a lot more. To take one case as a sample: I had a complex query involving groups, skips and takes, and it didn't perform well. When I wrote it by hand with ROW_NUMBER() etc., the same results took 4% of the "stats IO" and total time of the generated version.
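To sketch the kind of hand-written rewrite I mean (the table and column names here are hypothetical, not the actual query - the point is the ROW_NUMBER() shape, which computes the grouping once and pages over the numbered results):

```sql
-- Hypothetical example: page through per-customer order totals.
-- @skip and @take are the paging parameters.
;WITH numbered AS (
    SELECT o.CustomerId,
           SUM(o.Total) AS OrderTotal,
           ROW_NUMBER() OVER (ORDER BY SUM(o.Total) DESC) AS rn
    FROM Orders o
    GROUP BY o.CustomerId
)
SELECT CustomerId, OrderTotal
FROM numbered
WHERE rn BETWEEN @skip + 1 AND @skip + @take
ORDER BY rn;
```

You can check the difference yourself with SET STATISTICS IO ON and SET STATISTICS TIME ON, comparing the generated TSQL against the hand-written version.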
My current opinion on LINQ is that the ORM tools make data mutation a breeze, but for querying I tend to use dapper - which is ironic, since the Q in LINQ is "query".
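To show what that split looks like on the query side, here's a minimal sketch of dapper usage (the Customer table, column names and connection string are hypothetical; Query&lt;T&gt; and the anonymous parameter object are the actual Dapper API):

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;
using Dapper;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class CustomerRepository
{
    // Dapper adds Query<T> (and friends) as extension methods on IDbConnection;
    // you write the TSQL by hand and it materializes the rows straight into
    // objects - no identity map sitting in between.
    public static IEnumerable<Customer> GetByPrefix(string connectionString, string prefix)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            return conn.Query<Customer>(
                "select Id, Name from Customers where Name like @prefix + '%'",
                new { prefix });
        }
    }
}
```

Note that Query&lt;T&gt; buffers its results by default, so returning the sequence after the connection is disposed is fine here.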