
I've seen explanations both for why RAM is accessed in constant time (O(1)) and for why it's accessed in logarithmic time (O(log n)). Frankly, neither makes much sense to me: what is n in the big-O notation, and how does it make sense to measure the speed at which a physical device operates using big-O?

I understand one argument for RAM being accessed in constant time: if you have an array a, then the kth element is at address a + k * sizeof(type) (the point being that the address can be computed directly). If you know the address you want to load from or store to, wouldn't that take a constant amount of time, in the sense that it always takes the same time regardless of the location?

Someone told me that looking up something in RAM (like an element of an array) takes longer than constant time because it needs to find the right page. This is wrong, as paging pertains to the hard disk, not RAM.

Celeritas
  • *"paging pertains to the hard disk, not RAM."* is far from true. In embedded implementations, RAM can be paged when there is more RAM available than can be addressed directly. – Weather Vane Dec 16 '15 at 13:07
  • Accessing one *element* of an array is O(1). Reading the *entire* array is O(n). Linear time, not logarithmic. It doesn't scale as neatly as O(n), the processor caches and the need to evict other RAM pages can make it pretty irregular. But big Oh doesn't worry about that. – Hans Passant Dec 16 '15 at 13:10
  • It seems to me that you don't understand the concept of the big-O complexity, I strongly advise reading up on that first. You're mixing algorithmical complexity with the hardware implementation details. Also, paging and segmentation is used by every modern CPU to solve the memory fragmentation problem, if nothing else. For example, google "TLB". – Violet Giraffe Dec 16 '15 at 13:23
  • @VioletGiraffe that's my argument (or maybe question), you can't apply big-O when discussing how long hardware takes to do something. – Celeritas Dec 16 '15 at 13:24
  • @Celeritas: for one, it's a huge breach of abstraction (for my lack of a better term) to associate `array` - a programming entity, with RAM - one of the levels in the memory hierarchy of a modern computer. The fact that you ask this question tells me you don't fully understand what you're asking about, which is why I recommend focusing on that first. – Violet Giraffe Dec 16 '15 at 13:26
  • @VioletGiraffe correct me if I'm wrong, but an array is a contiguous chunk of memory...http://en.cppreference.com/w/cpp/language/array "A declaration of the form T a[N];, declares a as an array object that consists of N contiguously allocated objects" – Celeritas Dec 16 '15 at 13:31
  • @Celeritas: it is contiguous in the application's address space. Not necessarily so in physical memory. Which is why I'm saying you should not mix the programming language abstractions with hardware implementation details. – Violet Giraffe Dec 16 '15 at 18:02
  • 2
    Big-O is not about time it takes to read data. It coud take an hour to read a byte and still be O(1) if the time is the same regardless of the amount of data (i.e. RAM size). – zvone Dec 16 '15 at 19:43
  • @VioletGiraffe so that's a good point that I did not know. Where did you learn about it? – Celeritas Dec 16 '15 at 23:56
  • There are a couple of great books on the topic. Off the top of my head: "Structured Computer Organization" by Andrew Tanenbaum. Very educational, and a must-read for any programmer, IMO. – Violet Giraffe Dec 17 '15 at 07:02

1 Answer


I think RAM access time is a value on the order of nanoseconds, which is far faster than accessing the disk (5 to 80 ms).