
Everyone knows that we can get the value stored at an address in constant time. But this really confuses me: how can the hardware do that?

yodahaji
  • It's not exactly constant time. Real CPUs have caches and DRAM page effects, not to mention TLB misses in translating virtual addresses to physical. But anyway, see Ulrich Drepper's [What Every Programmer Should Know About Memory](https://stackoverflow.com/a/47714514) for the low-level details of how DRAM address lines work: they decode addresses to an intersecting grid of lines, and only the DRAM cell with the right row *and* column is read. This is what happens after the CPU sends an address to a DRAM chip and waits for a burst of data to come back. – Peter Cordes Aug 18 '18 at 17:01
  • "for the low-level details of how DRAM address lines work: they decode addresses to an intersecting grid of lines, and only the DRAM cell with the right row and column is read." So this process is constant time? How can it do that? – yodahaji Aug 19 '18 at 03:59
  • Multiple transistors working in parallel. Read the linked PDF (and maybe learn some electronics if necessary). – Peter Cordes Aug 19 '18 at 04:04
  • Some constant times are 'k', others are 'K', the constants are just a bit bigger ... – Surt Aug 19 '18 at 08:25
  • Is "constant time" to be taken in the physical sense or the CS sense? Physically, not considering QM, reading from the DRAM is deterministic, so given the same conditions it takes the same time. In the CS sense, the time complexity *depends on the model*, arithmetic is considered a constant time operation in some field and a linear one in others. So is for memory, though we rarely formalize computations at this level. – Margaret Bloom Aug 21 '18 at 14:33
  • Thank you all for answering my question! – yodahaji Aug 22 '18 at 15:04

0 Answers