
Relation between cache size and page size

How do the associativity and the page size constrain the cache size in a virtually indexed cache architecture?

In particular, I am looking for an example of the following statement:
If C ≤ (page_size × associativity), the cache index bits come only from the page offset (which is the same in the virtual address and the physical address).

  • That statement comes from page 11 of https://www.ece.cmu.edu/~ece447/s13/lib/exe/fetch.php?media=onur-447-spring13-lecture24-advancedcaching-afterlecture.pdf, which has a nice diagram. – Peter Cordes Oct 01 '17 at 17:08

1 Answer


Intel CPUs have used an 8-way associative 32 KiB L1D with 64 B lines for many years, for exactly this reason. Pages are 4 KiB, so the page offset is 12 bits, exactly the same number of bits that make up the set index and the offset within a cache line.

See the "L1 also uses speed tricks that wouldn't work if it was larger" paragraph in this answer for more details about how it lets the cache avoid aliasing problems like a PIPT cache, but still be as fast as a VIPT cache.

The idea is that the virtual address bits below the page offset are already physical address bits. So a VIPT cache that works this way is more like a PIPT cache with free translation of the index bits.

Peter Cordes
  • Is it just me, or is the last sentence above a bit misleading? – ultrajohn Nov 19 '18 at 04:43
  • @ultrajohn: misleading how? The low 12 bits of the virtual and physical address are the same; only the page-number bits need translation. (12 bits for the offset within a 4 KiB page.) – Peter Cordes Nov 19 '18 at 05:16