
O(log(n)) to retrieve an item from RAM

I have been running some timing experiments on how long it takes to retrieve a random element i from an array a of length n, and have found that on my hardware the access time grows approximately as log(n). I am trying to work out which part of the hardware/architecture is causing this, or might be responsible for it. (A citable technical reference is ideally what I'm after.)

What the code looks like

It's C code that resembles the following:

#include <stdlib.h>

unsigned long long int n;                           // Some very big number!
double *lookup_table = malloc(sizeof(double) * n);  // So big it only fits in RAM
unsigned long long int i = UNIFORM_RNG(0, n);       // Placeholder uniform RNG: integer in [0, n)
double x = lookup_table[i];                         // The access being timed

The key idea is that there is a big array lookup_table whose entries I access at random (hence I can confidently expect cache/page misses). The array is so big that I know the value will have to be fetched from RAM.

Parts of the code above are wrapped in a for loop, so I am moderately confident I am timing what I think I am timing and have the required precision.
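
For reference, a timing harness of this shape might look like the sketch below (the real benchmark is not public yet). The xorshift stand-in for UNIFORM_RNG, the choice of n, and the query count are hypothetical placeholders of mine, not the original code; only the structure (fill the array once, then time a clock()-wrapped for loop of random accesses) follows the description above.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Hypothetical stand-in for UNIFORM_RNG from the snippet above: a simple
 * xorshift64 generator reduced modulo n (modulo bias is irrelevant here). */
static unsigned long long int rng_state = 88172645463325252ULL;
static unsigned long long int uniform_rng(unsigned long long int n)
{
    rng_state ^= rng_state << 13;
    rng_state ^= rng_state >> 7;
    rng_state ^= rng_state << 17;
    return rng_state % n;
}

int main(void)
{
    unsigned long long int n = 1ULL << 30;             // 2^30 doubles = 8 GiB; chosen for illustration
    unsigned long long int num_queries = 10000000ULL;  // Hypothetical query count
    double *lookup_table = malloc(sizeof(double) * n);
    if (lookup_table == NULL) return 1;

    for (unsigned long long int j = 0; j < n; j++)     // Fill (and fault in) the array before timing
        lookup_table[j] = (double)j;

    volatile double sink = 0.0;                        // Stops the compiler from eliding the loads
    clock_t start = clock();
    for (unsigned long long int q = 0; q < num_queries; q++)
        sink += lookup_table[uniform_rng(n)];          // The RAM access being timed
    clock_t end = clock();

    printf("average time per lookup: %g s\n",
           (double)(end - start) / CLOCKS_PER_SEC / (double)num_queries);
    free(lookup_table);
    return 0;
}

Averaging over many random accesses keeps the measured interval well above clock()'s resolution, which is presumably what the for loop in the real code is for.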

Related questions

How much time does a program take to access RAM? (mentions O(1) and O(log(n)) but doesn't seem to provide any explanation pointing to hardware).

The hardware I've been using

64-bit x86 Intel Xeon Gold 6140 CPU operating at 2.3 GHz, running Linux kernel 4.13.0-25-generic on Ubuntu 16.04.3 LTS. The C code was compiled with the Intel C compiler icc 18.0.0 using the flags -mkl, -std=c11, -qopenmp, -qopt-report, -O2, -march=skylake-avx512, -xCORE-AVX512, and -qopt-zmm-usage=high.

Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              72
On-line CPU(s) list: 0-71
Thread(s) per core:  2
Core(s) per socket:  18
Socket(s):           2
NUMA node(s):        2
Vendor ID:           GenuineIntel
CPU family:          6
Model:               85
Model name:          Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
Stepping:            4
CPU MHz:             1000.410
BogoMIPS:            4600.00
Virtualisation:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            1024K
L3 cache:            25344K
NUMA node0 CPU(s):   0-17,36-53
NUMA node1 CPU(s):   18-35,54-71
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp_epp pku ospke md_clear flush_l1d
oliversm
  • probably depends on the architecture? – Alberto Sinigaglia Jul 07 '20 at 10:55
  • @Berto99 I have found this on Intel hardware. Will add a few of the specs to the question. – oliversm Jul 07 '20 at 10:57
  • are you sure that is not the allocation of the memory that is causing the complexity to be log(n)? – Alberto Sinigaglia Jul 07 '20 at 11:00
  • So this is a single threaded program where I ensure only 1 thread is concurrently running. The array is created and filled once, and then I time accessing it. Hence when the timing starts it is allocated already. – oliversm Jul 07 '20 at 11:04
  • Would this fit better on cs.stackexchange.com? – Curcuma_ Jul 07 '20 at 11:05
  • can you please post the real code with the std::chrono (if you are using C++)? – Alberto Sinigaglia Jul 07 '20 at 11:06
  • @Berto99, it's C and not C++, and the timing was done with a call to `clock()`, then a for loop wrapping many queries to RAM (which are "safely" assumed to be the dominant cost, as all other variables are expected to be in the L1 cache, or at worst L2), then another call to `clock()`. – oliversm Jul 07 '20 at 11:11
  • @Curcuma_ I don't think so, as this is likely a result of hardware and implementation details rather than CS considerations. – oliversm Jul 07 '20 at 11:25
  • Why not post the actual code, n and time values? – matt Jul 17 '20 at 08:13
  • @matt, it's not yet publicly available, but should be in a few months after I have finished writing my thesis. – oliversm Jul 17 '20 at 08:42
  • You're claiming that this just requires a loop with an array lookup to demonstrate the issue. If you cannot reduce this down to an example that demonstrates the issue, then the issue is greater than what you're proposing. Being able to compile the code and see the machine instructions could make a difference in this case. – matt Jul 17 '20 at 09:26

0 Answers