
I'm working with big maps (Go) and dicts (Python), and I'm wondering how much data a single RAM read/write can transfer at once.

For example, when I'm iterating through a 10 GB dict, would each cache miss fetch just a DWORD (or however wide a typical RAM access is), or will x86/the kernel load a batch of related data into L3 (or queue a set of load requests to the memory controller)?

In either case, wouldn't processing big data structures be bottlenecked by RAM access latency (plus all the refresh cycles the memory controller has to deal with)?
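As an aside (not from the original question): on Linux you can ask the kernel directly how wide that access granularity is, since it exports each cache's line size through sysfs. This sketch assumes a Linux system and the standard sysfs path; on x86 it reports 64 bytes.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// lineSize reports the L1 cache line size as exported by the Linux kernel.
// This is the granularity at which a normal cacheable load or store
// actually transfers data from DRAM. Linux-specific: the sysfs path
// below does not exist on other operating systems.
func lineSize() (int, error) {
	b, err := os.ReadFile("/sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size")
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(b)))
}

func main() {
	n, err := lineSize()
	if err != nil {
		fmt.Println("sysfs not available:", err)
		return
	}
	fmt.Printf("cache line size: %d bytes\n", n)
}
```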

asked by kpeteL
  • A whole cache line, 64 bytes, for normal memory access (not special regions marked uncacheable). And no, unless you have a linked list or tree, out-of-order exec can generate the next load address while a previous cache miss is still in flight. (memory level parallelism is very important for modern CPUs.) – Peter Cordes Mar 03 '22 at 14:10
  • Not exactly a duplicate of [What Every Programmer Should Know About Memory?](https://stackoverflow.com/q/8126311), but reading that will answer your question. Ah, found a more specific duplicate: [How do cache lines work?](https://stackoverflow.com/q/3928995) – Peter Cordes Mar 03 '22 at 14:12
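The distinction in the first comment — dependent pointer chases serialize on miss latency, while independent loads overlap — is easy to reproduce. Below is a minimal Go sketch (my own, not from the thread): it sums the same values once from a contiguous slice and once from a randomly-linked list where each node is padded to its own 64-byte cache line. On the slice, out-of-order execution and the prefetcher keep many line fetches in flight; on the list, each load's address depends on the previous miss, so the traversal typically runs several times slower once the data no longer fits in cache.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// node is padded so each one occupies a full 64-byte cache line:
// every p.next dereference costs a fresh line fetch.
type node struct {
	next *node
	val  int64
	_    [48]byte
}

// buildList links the nodes in a random order, so each next pointer
// is a dependent load to an unpredictable address — the hardware
// prefetcher cannot help, and loads cannot overlap.
func buildList(nodes []node) *node {
	perm := rand.Perm(len(nodes))
	for i := 0; i < len(perm)-1; i++ {
		nodes[perm[i]].next = &nodes[perm[i+1]]
	}
	nodes[perm[len(perm)-1]].next = nil
	return &nodes[perm[0]]
}

func sumSlice(s []int64) int64 {
	var sum int64
	for _, v := range s {
		sum += v
	}
	return sum
}

func sumList(head *node) int64 {
	var sum int64
	for p := head; p != nil; p = p.next {
		sum += p.val
	}
	return sum
}

func main() {
	const n = 1 << 20 // ~1M nodes * 64 B = 64 MiB, larger than typical L3
	slice := make([]int64, n)
	nodes := make([]node, n)
	for i := range slice {
		slice[i] = int64(i)
		nodes[i].val = int64(i)
	}
	head := buildList(nodes)

	t0 := time.Now()
	s1 := sumSlice(slice)
	t1 := time.Now()
	s2 := sumList(head)
	t2 := time.Now()

	fmt.Printf("slice: sum=%d in %v\n", s1, t1.Sub(t0))
	fmt.Printf("list:  sum=%d in %v\n", s2, t2.Sub(t1))
}
```

Both loops compute the identical sum; only the memory-level parallelism differs. This is also why iterating a huge Go map or Python dict (both hash tables with scattered buckets and, in CPython's case, pointers to boxed objects) behaves more like the list case than the slice case.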

0 Answers