Using a file or a database (i.e., storage that is not purely in memory/RAM) is certainly slower than operating purely in memory, but only by a constant factor: say one operation is 100x faster from memory; no matter how many times we perform it, it will always be 100x faster, so the slowdown is just a constant factor of 100.
Asymptotic complexity (big-O, big-Omega, big-Theta, etc.) of course ignores constant factors: O(n) = O(10000 n). (I'm sure one of the answers here will give some intuition into this, if need be.)
So it doesn't affect the running time complexity.
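To make that concrete, recall the definition of big-O: f(n) is O(g(n)) if there is some constant c such that f(n) <= c * g(n) for all sufficiently large n. With f(n) = 10000 n and g(n) = n, picking c = 10000 satisfies the definition, so 10000 n is O(n); the factor of 10000 is simply absorbed into the constant c.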
Whether a file or a database will be faster depends on multiple factors, among them:
- Network speed if a non-local database
- Hard drive speed
- What type of operations you want to do
For simple writes or one-time reads, a file should, in theory, be slightly faster, since databases usually persist to files as well and add their own overhead. For repeated reads, a database may be much faster, because results can be cached in memory instead of being re-read from disk. For complex operations, databases usually perform better. All in all, databases tend to be preferred, but this is really something you should benchmark to get accurate numbers for your workload.
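As a rough starting point, here is a minimal benchmark sketch, assuming a plain text file stands in for "a file" and the stdlib `sqlite3` module stands in for "a database"; the absolute numbers will vary a lot with hardware, record count, and the kind of queries you actually run:

```python
# Minimal benchmark sketch: write N records and read them all back,
# once with a plain text file and once with SQLite.
import os
import sqlite3
import tempfile
import time

N = 10_000  # number of records to write and read back

def bench_file(path):
    start = time.perf_counter()
    with open(path, "w") as f:
        for i in range(N):
            f.write(f"{i},value-{i}\n")
    with open(path) as f:
        rows = f.readlines()
    return time.perf_counter() - start, len(rows)

def bench_sqlite(path):
    start = time.perf_counter()
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE kv (k INTEGER, v TEXT)")
    conn.executemany("INSERT INTO kv VALUES (?, ?)",
                     ((i, f"value-{i}") for i in range(N)))
    conn.commit()
    rows = conn.execute("SELECT * FROM kv").fetchall()
    conn.close()
    return time.perf_counter() - start, len(rows)

with tempfile.TemporaryDirectory() as d:
    t_file, n_file = bench_file(os.path.join(d, "data.txt"))
    t_db, n_db = bench_sqlite(os.path.join(d, "data.db"))
    print(f"file:   {t_file:.4f}s for {n_file} rows")
    print(f"sqlite: {t_db:.4f}s for {n_db} rows")
```

The important part is to benchmark the operations you actually care about (e.g., repeated lookups or filtered queries rather than a single bulk read), since that is exactly where a database's indexing and caching start to pay off.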