
I have been doing some testing to see how much of a difference additional bounds checking makes in loops. This is prompted by thinking about the cost of implicit bounds checking inserted by languages such as C#, Java etc, when you access arrays.

Update: I have tried the same executable on several additional computers, which throws a lot more light on what is happening. I've listed the original computer first and my modern laptop second. On my modern laptop, adding additional checks in the loop adds only between 1 and 4% to the time taken, compared with between 3 and 30% on the original hardware.

Processor   x86 Family 6 Model 30 Stepping 5 GenuineIntel ~2793 Mhz
Ratio 2 checks : 1 check = 1.0310
Ratio 3 checks : 1 check = 1.2769

Processor   Intel(R) Core(TM) i7-3610QM CPU @ 2.30GHz, 2301 Mhz, 4 Core(s), 8 Logical Processor(s)
Ratio 2 checks : 1 check = 1.0090
Ratio 3 checks : 1 check = 1.0393

Processor   Intel(R) Core(TM) i5-2500 CPU @ 3.30GHz, 4 Cores(s)
Ratio 2 checks : 1 check = 1.0035
Ratio 3 checks : 1 check = 1.0639

Processor   Intel(R) Core(TM)2 Duo CPU     T9300  @ 2.50GHz, 2501 Mhz, 2 Core(s), 2 Logical Processor(s)
Ratio 2 checks : 1 check = 1.1195
Ratio 3 checks : 1 check = 1.3597

Processor   x86 Family 15 Model 43 Stepping 1 AuthenticAMD ~2010 Mhz
Ratio 2 checks : 1 check = 1.0776
Ratio 3 checks : 1 check = 1.1451

In the test program below, the first function checks just one bound, the second checks two, and the third checks three (in the calling code, n1=n2=n3). I found that the ratio of two checks to one check was about 1.03, and the ratio of three checks to one check was about 1.3. I was surprised that adding one more check made such a difference to performance. I got an interesting answer to my original question concerning the low cost of bounds checking on modern processors, which may throw some light on the differences observed here.

Note that it's important to compile the program without whole program optimization turned on; otherwise the compiler can simply remove the additional bounds checking.

// dotprod.cpp
#include "dotprod.h"

double SumProduct(const double* v1, const double* v2, int n)
{
    double sum=0;
    for(int i=0;
        i<n;
        ++i)
        sum += v1[i]*v2[i];
    return sum;
}

double SumProduct(const double* v1, const double* v2, int n1, int n2)
{
    double sum=0;
    for(int i=0;
        i<n1 && i <n2;
        ++i)
        sum += v1[i]*v2[i];
    return sum;
}

double SumProduct(const double* v1, const double* v2, int n1, int n2, int n3)
{
    double sum=0;
    for(int i=0;
        i<n1 && i <n2 && i <n3;
        ++i)
        sum += v1[i]*v2[i];
    return sum;
}
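As a point of comparison (a variant suggested in the comments), when the bounds are loop-invariant the checks can be hoisted by taking their minimum once before the loop, leaving a single comparison in the body. This is a hypothetical sketch, not part of the original test; the function name is made up:

```cpp
#include <algorithm>

// Hypothetical variant: collapse the three loop-invariant bounds into one
// before entering the loop, so the loop body performs a single comparison.
double SumProductHoisted(const double* v1, const double* v2,
                         int n1, int n2, int n3)
{
    const int stop = std::min(n1, std::min(n2, n3)); // hoisted once
    double sum = 0;
    for (int i = 0; i < stop; ++i)
        sum += v1[i] * v2[i];
    return sum;
}
```

Of course, this is exactly the transformation that the implicit bounds checks in managed languages prevent in general, since each access must be checked against the container's actual length.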

This code was originally built using Visual Studio 2010, Release, Win32 (I've added the 'C' tag because the reasoning behind the difference in speed is not likely to be C++ specific, and may not be Windows specific). Can anyone explain it?

The rest of the code is below, for information. It has some C++-specific stuff in it.

Header file

// dotprod.h
double SumProduct(const double*, const double*, int n);
double SumProduct(const double*, const double*, int n1, int n2);
double SumProduct(const double*, const double*, int n1, int n2, int n3);

Test harness

// main.cpp

#include <stdio.h>
#include <math.h>
#include <numeric>
#include <vector>

#include <windows.h>

#include "../dotprod/dotprod.h" // separate lib

typedef __int64 timecount_t;
inline timecount_t GetTimeCount()
{
    LARGE_INTEGER li;
    if (!QueryPerformanceCounter(&li)) {
        exit(1);
    }
    return li.QuadPart;
}

int main()
{
    typedef std::vector<double> dvec;
    const int N  = 100 * 1000;

    // Initialize
    dvec v1(N);
    dvec v2(N);
    dvec dp1(N);
    dvec dp2(N);
    dvec dp3(N);
    for(int i=0; i<N; ++i) {
        v1[i] = i;
        v2[i] = log(static_cast<double>(i+1));
    }

    const timecount_t t0 = GetTimeCount();

    // Check cost with one bound
    for(int n=0; n<N; ++n) {
        dp1[n] = SumProduct(&(v1[0]),&(v2[0]),n); 
    }

    const timecount_t t1 = GetTimeCount();

    // Check cost with two bounds
    for(int n=0; n<N; ++n) {
        dp2[n] = SumProduct(&(v1[0]),&(v2[0]),n,n); 
    }

    const timecount_t t2 = GetTimeCount();

    // Check cost with three bounds
    for(int n=0; n<N; ++n) {
        dp3[n] = SumProduct(&(v1[0]),&(v2[0]),n,n,n); 
    }
    const timecount_t t3 = GetTimeCount();

    // Check results
    const double sumSumProducts1 = std::accumulate(dp1.begin(), dp1.end(), 0.0);
    const double sumSumProducts2 = std::accumulate(dp2.begin(), dp2.end(), 0.0);
    const double sumSumProducts3 = std::accumulate(dp3.begin(), dp3.end(), 0.0);
    printf("Sums of dot products: %.1f, %.1f, %.1f\n", sumSumProducts1, sumSumProducts2, sumSumProducts3);

    // Output timings
    const timecount_t elapsed1 = t1-t0;
    const timecount_t elapsed2 = t2-t1;
    const timecount_t elapsed3 = t3-t2;
    printf("Elapsed: %.0f, %.0f, %.0f\n",
        static_cast<double>(elapsed1),
        static_cast<double>(elapsed2),
        static_cast<double>(elapsed3));
    const double ratio2to1 = elapsed2 / static_cast<double>(elapsed1);
    const double ratio3to1 = elapsed3 / static_cast<double>(elapsed1);
    printf("Ratio 2:1=%.2f\n", ratio2to1);
    printf("Ratio 3:1=%.2f\n", ratio3to1);

    return 0;
}
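As one commenter noted, `<chrono>` offers a portable alternative to `QueryPerformanceCounter`. A minimal sketch of how the harness's timer could be written, assuming a C++11 compiler (the helper names are hypothetical):

```cpp
#include <chrono>

// Portable high-resolution timing sketch using <chrono> in place of
// QueryPerformanceCounter. steady_clock is monotonic, so it is the
// appropriate choice for measuring intervals.
using clock_type = std::chrono::steady_clock;

inline clock_type::time_point GetTimePoint()
{
    return clock_type::now();
}

// Convert the difference between two time points to seconds.
inline double ElapsedSeconds(clock_type::time_point start,
                             clock_type::time_point end)
{
    return std::chrono::duration<double>(end - start).count();
}
```

The ratio calculations in the harness would be unchanged; only the raw tick type differs.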

In order to produce assembly, I took the advice in this answer (case 2, turning off whole program optimization), producing the following asm file.

; Listing generated by Microsoft (R) Optimizing Compiler Version 16.00.40219.01 

    TITLE   C:\dev\TestSpeed\dotprod\dotprod.cpp
    .686P
    .XMM
    include listing.inc
    .model  flat

INCLUDELIB OLDNAMES

PUBLIC  __real@0000000000000000
PUBLIC  ?SumProduct@@YANPBN0HHH@Z           ; SumProduct
EXTRN   __fltused:DWORD
;   COMDAT __real@0000000000000000
; File c:\dev\testspeed\dotprod\dotprod.cpp
CONST   SEGMENT
__real@0000000000000000 DQ 00000000000000000r   ; 0
; Function compile flags: /Ogtp
CONST   ENDS
;   COMDAT ?SumProduct@@YANPBN0HHH@Z
_TEXT   SEGMENT
tv491 = -4                      ; size = 4
_v1$ = 8                        ; size = 4
_v2$ = 12                       ; size = 4
_n1$ = 16                       ; size = 4
_n2$ = 20                       ; size = 4
_n3$ = 24                       ; size = 4
?SumProduct@@YANPBN0HHH@Z PROC              ; SumProduct, COMDAT

; 25   : {

    push    ebp
    mov ebp, esp
    push    ecx

; 26   :     double sum=0;

    fldz
    push    ebx
    mov ebx, DWORD PTR _v2$[ebp]
    push    esi
    push    edi
    mov edi, DWORD PTR _n1$[ebp]

; 27   :     for(int i=0;

    xor ecx, ecx

; 28   :         i<n1 && i <n2 && i <n3;
; 29   :         ++i)

    cmp edi, 4
    jl  $LC8@SumProduct

; 26   :     double sum=0;

    mov edi, DWORD PTR _v1$[ebp]
    lea esi, DWORD PTR [edi+24]

; 30   :         sum += v1[i]*v2[i];

    sub edi, ebx
    lea edx, DWORD PTR [ecx+2]
    lea eax, DWORD PTR [ebx+8]
    mov DWORD PTR tv491[ebp], edi
$LN15@SumProduct:

; 28   :         i<n1 && i <n2 && i <n3;
; 29   :         ++i)

    mov ebx, DWORD PTR _n2$[ebp]
    cmp ecx, ebx
    jge $LN9@SumProduct
    cmp ecx, DWORD PTR _n3$[ebp]
    jge $LN9@SumProduct

; 30   :         sum += v1[i]*v2[i];

    fld QWORD PTR [eax-8]
    lea edi, DWORD PTR [edx-1]
    fmul    QWORD PTR [esi-24]
    faddp   ST(1), ST(0)
    cmp edi, ebx
    jge SHORT $LN9@SumProduct

; 28   :         i<n1 && i <n2 && i <n3;
; 29   :         ++i)

    cmp edi, DWORD PTR _n3$[ebp]
    jge SHORT $LN9@SumProduct

; 30   :         sum += v1[i]*v2[i];

    mov edi, DWORD PTR tv491[ebp]
    fld QWORD PTR [edi+eax]
    fmul    QWORD PTR [eax]
    faddp   ST(1), ST(0)
    cmp edx, ebx
    jge SHORT $LN9@SumProduct

; 28   :         i<n1 && i <n2 && i <n3;
; 29   :         ++i)

    cmp edx, DWORD PTR _n3$[ebp]
    jge SHORT $LN9@SumProduct

; 30   :         sum += v1[i]*v2[i];

    fld QWORD PTR [eax+8]
    lea edi, DWORD PTR [edx+1]
    fmul    QWORD PTR [esi-8]
    faddp   ST(1), ST(0)
    cmp edi, ebx
    jge SHORT $LN9@SumProduct

; 28   :         i<n1 && i <n2 && i <n3;
; 29   :         ++i)

    cmp edi, DWORD PTR _n3$[ebp]
    jge SHORT $LN9@SumProduct

; 30   :         sum += v1[i]*v2[i];

    fld QWORD PTR [eax+16]
    mov edi, DWORD PTR _n1$[ebp]
    fmul    QWORD PTR [esi]
    add ecx, 4
    lea ebx, DWORD PTR [edi-3]
    add eax, 32                 ; 00000020H
    add esi, 32                 ; 00000020H
    faddp   ST(1), ST(0)
    add edx, 4
    cmp ecx, ebx
    jl  SHORT $LN15@SumProduct
    mov ebx, DWORD PTR _v2$[ebp]
$LC8@SumProduct:

; 28   :         i<n1 && i <n2 && i <n3;
; 29   :         ++i)

    cmp ecx, edi
    jge SHORT $LN9@SumProduct
    mov edx, DWORD PTR _v1$[ebp]
    lea eax, DWORD PTR [ebx+ecx*8]
    sub edx, ebx
$LC3@SumProduct:
    cmp ecx, DWORD PTR _n2$[ebp]
    jge SHORT $LN9@SumProduct
    cmp ecx, DWORD PTR _n3$[ebp]
    jge SHORT $LN9@SumProduct

; 30   :         sum += v1[i]*v2[i];

    fld QWORD PTR [eax+edx]
    inc ecx
    fmul    QWORD PTR [eax]
    add eax, 8
    faddp   ST(1), ST(0)
    cmp ecx, edi
    jl  SHORT $LC3@SumProduct
$LN9@SumProduct:

; 31   :     return sum;
; 32   : }

    pop edi
    pop esi
    pop ebx
    mov esp, ebp
    pop ebp
    ret 0
?SumProduct@@YANPBN0HHH@Z ENDP              ; SumProduct
_TEXT   ENDS
PUBLIC  ?SumProduct@@YANPBN0HH@Z            ; SumProduct
; Function compile flags: /Ogtp
;   COMDAT ?SumProduct@@YANPBN0HH@Z
_TEXT   SEGMENT
tv448 = -4                      ; size = 4
_v1$ = 8                        ; size = 4
_v2$ = 12                       ; size = 4
_n1$ = 16                       ; size = 4
_n2$ = 20                       ; size = 4
?SumProduct@@YANPBN0HH@Z PROC               ; SumProduct, COMDAT

; 15   : {

    push    ebp
    mov ebp, esp
    push    ecx

; 16   :     double sum=0;

    fldz
    push    ebx
    mov ebx, DWORD PTR _v2$[ebp]
    push    esi
    push    edi
    mov edi, DWORD PTR _n1$[ebp]

; 17   :     for(int i=0;

    xor ecx, ecx

; 18   :         i<n1 && i <n2;
; 19   :         ++i)

    cmp edi, 4
    jl  SHORT $LC8@SumProduct@2

; 16   :     double sum=0;

    mov edi, DWORD PTR _v1$[ebp]
    lea edx, DWORD PTR [edi+24]

; 20   :         sum += v1[i]*v2[i];

    sub edi, ebx
    lea esi, DWORD PTR [ecx+2]
    lea eax, DWORD PTR [ebx+8]
    mov DWORD PTR tv448[ebp], edi
$LN19@SumProduct@2:
    mov edi, DWORD PTR _n2$[ebp]
    cmp ecx, edi
    jge SHORT $LN9@SumProduct@2
    fld QWORD PTR [eax-8]
    lea ebx, DWORD PTR [esi-1]
    fmul    QWORD PTR [edx-24]
    faddp   ST(1), ST(0)
    cmp ebx, edi
    jge SHORT $LN9@SumProduct@2
    mov ebx, DWORD PTR tv448[ebp]
    fld QWORD PTR [ebx+eax]
    fmul    QWORD PTR [eax]
    faddp   ST(1), ST(0)
    cmp esi, edi
    jge SHORT $LN9@SumProduct@2
    fld QWORD PTR [eax+8]
    lea ebx, DWORD PTR [esi+1]
    fmul    QWORD PTR [edx-8]
    faddp   ST(1), ST(0)
    cmp ebx, edi
    jge SHORT $LN9@SumProduct@2
    fld QWORD PTR [eax+16]
    mov edi, DWORD PTR _n1$[ebp]
    fmul    QWORD PTR [edx]
    add ecx, 4
    lea ebx, DWORD PTR [edi-3]
    add eax, 32                 ; 00000020H
    add edx, 32                 ; 00000020H
    faddp   ST(1), ST(0)
    add esi, 4
    cmp ecx, ebx
    jl  SHORT $LN19@SumProduct@2
    mov ebx, DWORD PTR _v2$[ebp]
$LC8@SumProduct@2:

; 18   :         i<n1 && i <n2;
; 19   :         ++i)

    cmp ecx, edi
    jge SHORT $LN9@SumProduct@2
    mov edx, DWORD PTR _v1$[ebp]
    lea eax, DWORD PTR [ebx+ecx*8]
    sub edx, ebx
$LC3@SumProduct@2:
    cmp ecx, DWORD PTR _n2$[ebp]
    jge SHORT $LN9@SumProduct@2

; 20   :         sum += v1[i]*v2[i];

    fld QWORD PTR [eax+edx]
    inc ecx
    fmul    QWORD PTR [eax]
    add eax, 8
    faddp   ST(1), ST(0)
    cmp ecx, edi
    jl  SHORT $LC3@SumProduct@2
$LN9@SumProduct@2:

; 21   :     return sum;
; 22   : }

    pop edi
    pop esi
    pop ebx
    mov esp, ebp
    pop ebp
    ret 0
?SumProduct@@YANPBN0HH@Z ENDP               ; SumProduct
_TEXT   ENDS
PUBLIC  ?SumProduct@@YANPBN0H@Z             ; SumProduct
; Function compile flags: /Ogtp
;   COMDAT ?SumProduct@@YANPBN0H@Z
_TEXT   SEGMENT
_v1$ = 8                        ; size = 4
_v2$ = 12                       ; size = 4
?SumProduct@@YANPBN0H@Z PROC                ; SumProduct, COMDAT
; _n$ = eax

; 5    : {

    push    ebp
    mov ebp, esp
    mov edx, DWORD PTR _v2$[ebp]

; 6    :     double sum=0;

    fldz
    push    ebx
    push    esi
    mov esi, eax

; 7    :     for(int i=0;

    xor ebx, ebx
    push    edi
    mov edi, DWORD PTR _v1$[ebp]

; 8    :         i<n;
; 9    :         ++i)

    cmp esi, 4
    jl  SHORT $LC9@SumProduct@3

; 6    :     double sum=0;

    lea eax, DWORD PTR [edx+8]
    lea ecx, DWORD PTR [edi+24]

; 10   :         sum += v1[i]*v2[i];

    sub edi, edx
    lea edx, DWORD PTR [esi-4]
    shr edx, 2
    inc edx
    lea ebx, DWORD PTR [edx*4]
$LN10@SumProduct@3:
    fld QWORD PTR [eax-8]
    add eax, 32                 ; 00000020H
    fmul    QWORD PTR [ecx-24]
    add ecx, 32                 ; 00000020H
    dec edx
    faddp   ST(1), ST(0)
    fld QWORD PTR [edi+eax-32]
    fmul    QWORD PTR [eax-32]
    faddp   ST(1), ST(0)
    fld QWORD PTR [eax-24]
    fmul    QWORD PTR [ecx-40]
    faddp   ST(1), ST(0)
    fld QWORD PTR [eax-16]
    fmul    QWORD PTR [ecx-32]
    faddp   ST(1), ST(0)
    jne SHORT $LN10@SumProduct@3

; 6    :     double sum=0;

    mov edx, DWORD PTR _v2$[ebp]
    mov edi, DWORD PTR _v1$[ebp]
$LC9@SumProduct@3:

; 8    :         i<n;
; 9    :         ++i)

    cmp ebx, esi
    jge SHORT $LN8@SumProduct@3
    sub edi, edx
    lea eax, DWORD PTR [edx+ebx*8]
    sub esi, ebx
$LC3@SumProduct@3:

; 10   :         sum += v1[i]*v2[i];

    fld QWORD PTR [eax+edi]
    add eax, 8
    dec esi
    fmul    QWORD PTR [eax-8]
    faddp   ST(1), ST(0)
    jne SHORT $LC3@SumProduct@3
$LN8@SumProduct@3:

; 11   :     return sum;
; 12   : }

    pop edi
    pop esi
    pop ebx
    pop ebp
    ret 0
?SumProduct@@YANPBN0H@Z ENDP                ; SumProduct
_TEXT   ENDS
END
asked by TooTone
  • `void main()` is not legal C++. I suggest using `int main()`. I also suggest using `std::vector` instead of trying to manage memory yourself. – chris Jun 12 '13 at 16:29
  • It may be quite interesting to see the result of your tests on a compiler capable of optimizing for a processor, such as `gcc -march=native`. Do you have a chance to compare this? – Jun 12 '13 at 16:31
  • GetTickCount is not precise, use QueryPerformanceCounter: http://stackoverflow.com/questions/1739259/how-to-use-queryperformancecounter – Alex F Jun 12 '13 at 16:31
  • Anyway, may I suggest that you just do something like `int stop = min(n1, min(n2, n3));` before the loop, then use `i < stop`? – gx_ Jun 12 '13 at 16:34
  • You should show the generated asm. Probably the compiler knows that i < … – Marc Glisse Jun 12 '13 at 16:35
  • Begs the question, "why do you think you need bounds checking?" – John Dibling Jun 12 '13 at 16:39
  • @Chris this is not production code, especially main.cpp – TooTone Jun 12 '13 at 18:58
  • @gx_ Thanks but I'm trying to simulate code, e.g. from C#, where the run time system inserts extra checks on access to containers. This is sometimes cited as a reason why C/C++ is faster for low level algorithms. – TooTone Jun 12 '13 at 19:04
  • @TooTone, None of my code is production code. It's not exactly a hard habit to get into in my experience, and definitely worth it. – chris Jun 12 '13 at 19:36
  • @chris point taken. Do you have any thoughts about the performance in the meantime? There are some nice comments here about compiler optimizations, gcc, inaccuracy of GetTickCount, and I thought you might have something to add? – TooTone Jun 12 '13 at 19:52
  • I'm not usually into optimization specifics, but I can say that you can use `<chrono>` as an alternative to `QueryPerformanceCounter`. – chris Jun 12 '13 at 20:02
  • @Alex thanks, I will convert to `QueryPerformanceCounter` within the next few days. I have a nagging feeling I've done something silly, and taking a shortcut and using `GetTickCount` might be it. – TooTone Jun 12 '13 at 22:11
  • Using Visual Studio 2008 with QueryPerformanceCounter, not timing the memory allocations, and /arch:SSE2 set, all 3 are very similar with the second and third sometimes beating the first. – Retired Ninja Jun 13 '13 at 03:25
  • Note that x86 has very few registers, so the big jump may be due to register spilling. – Raymond Chen Jun 13 '13 at 03:25
  • @MarcGlisse that's a really good point; compiler optimizations always sound cool when you haven't come across them before. I have uploaded the asm, and also tidied up the main.cpp as advised by the comments above. – TooTone Jun 14 '13 at 11:27
  • @RetiredNinja thanks, I tidied up the code today and got rid of the embarrassing allocation within the timing loop but I still get similar numbers. – TooTone Jun 14 '13 at 13:02
  • @RetiredNinja I found that with whole program optimization enabled on my most modern hardware (on visual studio 2012), the runtimes were virtually identical. However whole program optimization defeated the purpose of the test (because it allows inlining the code and the compiler can directly use n1=n2=n3). When I disabled this, I found that the runtimes were about 1% slower for 2 checks and 4% slower for 3 checks. I.e. on my modern laptop, the extra checks make v little difference. – TooTone Jun 17 '13 at 14:00
  • You should try a taste-test comparison against C#. .Net installs a framework that is supposedly locally optimized to the processor in your machine. Since your code is generic 32-bit ASM, it can't take advantage of extended instruction sets available in more modern processors. Theoretically, the performance enhancements gained can offset the cost of the just-in-time compilation over an extended run. – John Deters Jun 17 '13 at 14:44
  • I suspect the major differences are a mix of 1) compiler, 2) version of compiler, 3) C run time library, 4) version of C run time library, 5) compiler options and optimizations used in compiling C run time library, 6) compiler options and optimizations used in building test code, 7) system architecture, including processor, cache hierarchy, and underlying instruction set architecture, 8) system load and conflicts over cache and other resources, 9) testing methodology, 10) expectations, and 11) other stuff... – twalberg Jun 17 '13 at 15:07
  • @twalberg re 1 through 6, I statically linked the C run-time library (and I turned on options to enable a VS2012-built exe to run on XP as some of the machines were running XP). Re 8 and 9, I ran several times, not under load, and gave indicative results. The numbers indicate pretty strongly that it is 7 (system architecture). – TooTone Jun 17 '13 at 15:14
  • @JohnDeters A very interesting point. I have heard arguments for and against the ability of JIT compiling to take advantage of particular architectures (the arguments against are the ability of the C/C++ compiler to take as long as it likes over its optimizations). Time allowing I'll give that a try. – TooTone Jun 17 '13 at 15:20
  • @TooTone, I share your curiosity of needing to understand the performance question. However, isolating one specific safety check or optimization is almost always a mistake. Study the overall performance of the application to understand if the total of optimizations and safety features are helping or hindering you. While this test might create a 0.1% performance slowdown, something else might provide a 20% speedup or 50% slowdown. In addition, consider the inefficiency of crashing due to an unchecked out-of-bounds bug in C++. Manually handling one bug report wipes out all CPU efficiency gains. – John Deters Jun 17 '13 at 15:59
  • @JohnDeters I broadly agree. With a higher level language there are gains in developer productivity, the ability to optimize from a birds-eye point of view, more easily parallelize algorithms, etc. What the results I posted today suggest to me is that with modern architectures, e.g. on the i5 and i7, bounds checking is not a terribly serious overhead. And perhaps because absolute clock speed gains are harder to come by, processors are now optimizing more on prediction / keeping the pipeline full (see http://stackoverflow.com/a/16719474/834521)? But, I'm not expert enough to answer definitely. – TooTone Jun 17 '13 at 16:12

1 Answer


One big difference between CPUs is pipeline optimization.

The CPU can execute several instructions in parallel until it reaches a conditional branch. From that point, instead of waiting until all prior instructions have completed, the CPU can speculatively continue along one branch until the condition is available and can be evaluated. If the guess was correct, we have a gain; otherwise the CPU discards the speculative work and continues along the other branch.

So the tricky part for a CPU is to make the best guesses and to execute as many instructions in parallel as possible.