I see you've read a lot about it, but allow me to point you to a very good explanation of the dangers of using NOLOCK (that is, the READ UNCOMMITTED isolation level): SQL Server NOLOCK Hint & other poor ideas.
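To be clear about that equivalence, here is a minimal sketch (dbo.SomeTable is a hypothetical table):

```sql
-- These two reads behave the same way: both allow dirty reads.
SELECT * FROM dbo.SomeTable WITH (NOLOCK);        -- per-table hint

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; -- per-session setting
SELECT * FROM dbo.SomeTable;
```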
Apart from this, I'll make some citations and comments. The worst part of NOLOCK is this:
It creates “incredibly hard to reproduce” bugs.
The problem is that when you read uncommitted data, most of the time it is later committed, so everything looks alright. But it will randomly fail whenever the transaction is rolled back instead of committed. And that doesn't usually happen, right? Nope: first, a single error is a very bad thing (your customers don't like it). And second, things can get much worse. Look:
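Here is a minimal sketch of that failure mode, using a hypothetical dbo.Accounts table and two separate connections:

```sql
-- Setup (hypothetical table for illustration).
CREATE TABLE dbo.Accounts (Id int PRIMARY KEY, Balance money NOT NULL);
INSERT INTO dbo.Accounts (Id, Balance) VALUES (1, 100);

-- Session 1: update inside a transaction, but don't commit yet.
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET Balance = 999 WHERE Id = 1;

-- Session 2 (separate connection): dirty read sees the uncommitted value.
SELECT Balance FROM dbo.Accounts WITH (NOLOCK) WHERE Id = 1;  -- returns 999

-- Session 1: changes its mind.
ROLLBACK TRANSACTION;
-- Session 2 has acted on a balance of 999 that officially never existed.
```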
The issue is that transactions do more than just update the row. Often they require an index to be updated, or they run out of space on the data page. This may require new pages to be allocated & existing rows on that page to be moved, which is called a page split. It is possible for your select to completely miss a number of rows &/or count other rows twice. More info on this in the linked article.
So, that means that even if the uncommitted transaction you've read is eventually committed, you can still read bad data. And this will happen at random times. That's ugly, very ugly!
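The page-split effect can even be provoked on purpose. A rough sketch (hypothetical dbo.Wide table; you may need to let both sessions run for a while to catch a bad count):

```sql
-- Setup: a clustered table whose rows we will grow to force page splits.
CREATE TABLE dbo.Wide
(
    Id     int IDENTITY PRIMARY KEY CLUSTERED,
    Filler varchar(4000) NOT NULL DEFAULT 'x'
);

INSERT INTO dbo.Wide (Filler)
SELECT TOP (100000) 'x'
FROM sys.all_objects AS a CROSS JOIN sys.all_objects AS b;

-- Session 1: grow and shrink rows in a loop so pages keep splitting.
WHILE 1 = 1
BEGIN
    UPDATE dbo.Wide SET Filler = REPLICATE('x', 3000);
    UPDATE dbo.Wide SET Filler = 'x';
END;

-- Session 2: there are always exactly 100000 committed rows, yet a
-- NOLOCK scan can miss moved rows or count them twice.
SELECT COUNT(*) FROM dbo.Wide WITH (NOLOCK);  -- occasionally <> 100000
```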
What about corruption?
As Remus Rusanu said, it's not "hard" but "soft" corruption. And it especially affects aggregates, because you're reading what you shouldn't while updating them. This can lead, for example, to a wrong account balance.
Haven't you heard of big LOB apps that have procedures to rebuild account balances? Why? The balances should have been correctly updated inside transactions in the first place! (That can be acceptable if the balances are rebuilt at critical moments, for example while calculating taxes.)
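The dangerous pattern is the read-modify-write: read an amount with NOLOCK, compute, and write it back. Reusing the hypothetical dbo.Accounts table from above:

```sql
-- Anti-pattern: base a write on a dirty read.
DECLARE @Balance money;

SELECT @Balance = Balance
FROM dbo.Accounts WITH (NOLOCK)   -- may see a value that gets rolled back
WHERE Id = 1;

UPDATE dbo.Accounts
SET Balance = @Balance + 50       -- the wrong base is now committed for good
WHERE Id = 1;

-- Safer: let the engine read and write atomically, under normal locking.
UPDATE dbo.Accounts
SET Balance = Balance + 50
WHERE Id = 1;
```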
What can I do without corrupting data (i.e., what is relatively safe)?
Let's say it's "quite safe" to read uncommitted data when you're not using it to update other existing data in the DB. That is, if you use NOLOCK only for reporting purposes (without write-back), you're on the "quite safe" side. The only "tiny trouble" is that the report can show wrong data, but, at least, the data in the DB will remain consistent.
Whether to consider this safe depends on the purpose of what you're reading. If it's something informational, which is not going to be used to make decisions, it's quite safe (for example, it's not very bad to have some errors in a report of the best customers, or the most sold products). But if you're getting this information to make decisions, things can be much worse (you could make a decision on a wrong basis!).
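For a report-only connection it's also cleaner to set the isolation level once instead of sprinkling hints on every table (a sketch; dbo.Customers and dbo.Orders are hypothetical):

```sql
-- Accept dirty reads for the whole session: reporting only, no write-back.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT c.Name, SUM(o.Total) AS Revenue
FROM dbo.Customers AS c
JOIN dbo.Orders    AS o ON o.CustomerId = c.Id
GROUP BY c.Name
ORDER BY Revenue DESC;
```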
A particular experience
I worked on the development of a 'crowded' application, with some 1,500 users, which used NOLOCK for reading data, modifying it and updating it in the DB (an HHRR/TEA company). And (apparently) there were no problems. The trick was that each employee read "atomic data" (a single employee's data) to modify it, and it was nearly impossible for two people to read and modify the same data at the same time. Besides, this "atomic data" didn't influence any aggregate data. So everything was fine. But from time to time there were problems in the reporting area, which read "aggregated data" with NOLOCK. So the critical reports had to be scheduled for moments when no one was working in the DB. The small deviations in non-critical reports were overlooked and deemed acceptable.
Now you know. You have no excuses. You decide: to NOLOCK or not to NOLOCK.