Brewer's CAP theorem is the best place to start understanding the options available to you. I can say that it all depends, but if we're talking about Mongo, it provides horizontal scalability out of the box, which is very nice in some situations.
Now about consistency. You have three options for keeping your data up-to-date:
1) First thing to consider is "safe" mode, or "getLastError()", as indicated by Andreas. If you issue a "safe" write, you know that the database has received the insert and applied the write. However, MongoDB only flushes to disk every 60 seconds, so the server can fail before the data reaches disk (all three options are sketched in code after this list).
2) Second thing to consider is "journaling" (v1.8+). With journaling turned on, data is flushed to the journal every 100 ms, so you have a smaller window of time before failure. The drivers have an "fsync" option (check that name) that goes one step further than "safe": it waits for acknowledgement that the data has been flushed to disk (i.e. the journal file). However, this only covers one server. What happens if the server's hard drive just dies? Then you need a second copy.
3) Third thing to consider is replication. The drivers support a "w" parameter that says "replicate this data to N nodes" before returning. If the write does not reach N nodes before a certain timeout, the write fails (an exception is thrown). You have to configure "w" correctly based on the number of nodes in your replica set. Again, because a hard drive could fail even with journaling, you'll want to look at replication. Then there's replication across data centers, which is too long to get into here.

The last thing to consider is your requirement to "roll back". From my understanding, MongoDB does not have this "roll back" capacity. If you're doing a batch insert, the best you'll get is an indication of which elements failed.
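To make these three levels concrete, here is a minimal PyMongo sketch. Note that modern drivers expose all of this through write concerns rather than the old "safe"/getLastError() calls, and the connection URI, database, and collection names below are placeholders:

```python
# Minimal sketch of the three acknowledgement levels via PyMongo write concerns.
# "mongodb://localhost:27017", "mydb", and "orders" are placeholder names.
from pymongo import MongoClient
from pymongo.errors import WTimeoutError
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017")
db = client["mydb"]

# 1) "Safe" write: w=1 waits for the primary to acknowledge the write
#    (the modern equivalent of the old safe/getLastError() round trip).
safe_orders = db.get_collection("orders", write_concern=WriteConcern(w=1))
safe_orders.insert_one({"item": "widget", "qty": 5})

# 2) Journaled write: j=True additionally waits until the write has been
#    flushed to the on-disk journal, shrinking the crash window.
journaled_orders = db.get_collection(
    "orders", write_concern=WriteConcern(w=1, j=True)
)
journaled_orders.insert_one({"item": "widget", "qty": 5})

# 3) Replicated write: w=2 waits until the write has reached two replica
#    set members; wtimeout (milliseconds) makes it raise instead of
#    blocking forever if replication lags.
replicated_orders = db.get_collection(
    "orders", write_concern=WriteConcern(w=2, wtimeout=5000)
)
try:
    replicated_orders.insert_one({"item": "widget", "qty": 5})
except WTimeoutError as exc:
    print("write was not replicated in time:", exc)
```

The same w/j/wtimeout knobs exist under the write-concern name in the other official drivers as well, so the pattern carries over regardless of language.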
Anyhow, there are a lot of scenarios where data consistency becomes the developer's responsibility, and it is up to you to be careful, cover all the scenarios, and adjust the DB schema accordingly, because there is no single "right way to do it" in Mongo like we are used to in RDBMSs.
About memory: this is purely a performance question. MongoDB keeps indexes and the "working set" in RAM, so by limiting your RAM you limit your working set. You can actually do better with an SSD and a smaller amount of RAM than with a huge amount of RAM and an HDD; at least those are the official recommendations. Anyhow, this question is individual: you should run performance tests for your specific use cases.
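If you want to sanity-check whether your data and indexes fit in RAM before running full performance tests, the server exposes the relevant numbers through the `collStats` and `serverStatus` commands. A rough sketch, again with placeholder names:

```python
# Rough way to gauge RAM needs: compare index/data sizes against the
# server's resident memory. Same placeholder client/database as above.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["mydb"]

# Per-collection sizes (bytes): indexes should ideally fit in RAM.
stats = db.command("collStats", "orders")
print("data size (bytes):   ", stats["size"])
print("total index size:    ", stats["totalIndexSize"])

# Server-wide view: how much memory the mongod process is actually using.
status = db.command("serverStatus")
print("resident memory (MB):", status["mem"]["resident"])
```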