
I have some code and would like to optimize its L1 cache miss/hit ratio. Is there a way to see cache hits/misses when memory-profiling Python code?

There are tools for this in C++, for example: Measuring Cache Latencies

EDIT: This may include compiled variants of Python such as Cython or Numba (JIT).

tensor
  • If you're looking for software or tutorials, your question is probably off-topic. I haven't seen such software for Python, but it would be very interesting indeed. – MSeifert Dec 08 '16 at 01:36
  • 1
    I highly doubt it would be all that useful; an interpreted language like Python jumps all over the place in the interpreter, and the "hot spots" on the level of L1 cache data are tied to the interpreter design more than they are to any code you actually wrote. Even when you have some influence over it, the L1 cache misses you control would not affect runtime significantly; the interpreter overhead you don't control would likely be an order of magnitude higher. Getting fussy about the L1 cache is a very low level problem, and Python is anything but low level. – ShadowRanger Dec 08 '16 at 01:51
  • What about Cython- and Numba-compiled versions? – tensor Dec 08 '16 at 03:55

1 Answer


Although no Python-specific tools have been found yet, some third-party tools might be helpful for investigating this issue:

Cachegrind: a cache and branch-prediction profiler http://valgrind.org/docs/manual/cg-manual.html
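Since Cachegrind simulates the whole process, it also covers Cython/Numba-compiled extensions loaded by the interpreter. A minimal sketch of the workflow (the script name is a placeholder; the output filename carries the actual process PID):

```shell
# Run the Python process under Cachegrind; the summary at the end
# reports L1 and last-level (LL) instruction/data miss rates.
valgrind --tool=cachegrind python my_script.py

# Annotate per-function hit/miss counts from the generated file.
cg_annotate cachegrind.out.<pid>
```

Note that for pure-Python code most of the reported misses belong to the interpreter itself (as the comments above point out), so the per-function annotation is mainly informative for the compiled parts.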

PyCacheSim (simulation only) : https://github.com/RRZE-HPC/pycachesim

tensor