As mentioned in a comment, SQL Server does not support one million columns. However, you should be able to observe the problem (under the right circumstances) with just two columns.
The answer is "yes", when using READUNCOMMITTED (or the NOLOCK hint). Finding a reference in the documentation is quite difficult, but here is a blog that talks about it.
The key issue is rows that span multiple pages. This is the case whenever you have a LOB, varchar(max), or similar type of column. If another transaction is updating the pages containing values for the large object, then a concurrent READUNCOMMITTED query can read partial values. The same is true if you have multiple large objects: some might have new values and some might have old values. So, you can get inconsistent results within a single record.
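A minimal sketch of the scenario (table and column names are hypothetical; actually reproducing the torn read depends on timing and on the values being large enough to spill onto separate pages):

```sql
-- Session 1: update two varchar(max) columns of the same row in one transaction.
BEGIN TRANSACTION;
UPDATE dbo.Docs
   SET Body1 = REPLICATE(CONVERT(varchar(max), 'new'), 100000)
 WHERE Id = 1;
UPDATE dbo.Docs
   SET Body2 = REPLICATE(CONVERT(varchar(max), 'new'), 100000)
 WHERE Id = 1;
COMMIT;

-- Session 2, running concurrently: with NOLOCK it may see Body1's new
-- value alongside Body2's old value -- an inconsistent single row.
SELECT Body1, Body2
  FROM dbo.Docs WITH (NOLOCK)
 WHERE Id = 1;
```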
I don't think the same thing would happen (at least in practice) for a record stored on a single page. It might happen if you have a multi-statement transaction that updates different columns at different times, but not for a single statement.
Also, this should not happen at higher isolation levels. In effect, READUNCOMMITTED is a way to bypass some of the database's integrity checks for performance -- which entails a necessary relaxation of some of the ACID properties.
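By contrast, at the default isolation level the reader waits for the writer's locks instead of reading in-flight pages (same hypothetical table as above):

```sql
-- Session 2 under the default isolation level: the SELECT blocks until
-- the writer's transaction commits, so it never sees a half-updated row.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT Body1, Body2
  FROM dbo.Docs
 WHERE Id = 1;
```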