
I'm trying to efficiently add up everything in a compile-time sized array, using the least number of instructions. Naturally I'm using templates. I created this:

template<unsigned int startIndex, unsigned int count>
int AddCollapseArray(int theArray[])
{
    if(count == 1)
    {
        return theArray[startIndex];
    }
    else if(count == 2)
    {
        return theArray[startIndex] + theArray[startIndex + 1];
    }
    else if(count % 2 == 0)
    {
        return AddCollapseArray<startIndex, count / 2>(theArray) + AddCollapseArray<startIndex + count / 2, count / 2>(theArray);
    }
    else if (count % 2 == 1)
    {
        int newCount = count-1;
        return AddCollapseArray<startIndex, newCount / 2>(theArray) + AddCollapseArray<startIndex + newCount / 2, newCount / 2>(theArray) + theArray[startIndex + newCount];
    }
}

It seems to me this will get the job done most efficiently. I think the branching and all the arithmetic besides the additions will be completely optimized out. Are there any flaws with doing it this way?

Thomas
    Is there a special reason you must do it this way? I would use `std::accumulate` and leave it like that unless there is a reason not to. – Neil Kirk Sep 15 '15 at 02:05
    If you add up 10 numbers you'll need 9 addition operations. There's no way around that. If you work out your recursion scheme for 10 elements you'll find it will indeed use 9 additions. – roeland Sep 15 '15 at 02:08
  • When I read the stipulation of using "the least number of instructions", I'm thinking in terms of machine language instructions or the closest equivalent in C or C++, is that the assignment? Further, you state the size of the array is known at compile time, but are the VALUES in the array also known at compile time? In the latter case you can use simple metaprogramming to compute the sum during compilation, resulting in zero instructions at runtime, but only when the VALUES are known at compile time. – JVene Sep 15 '15 at 02:08
  • @JVene values are not known at compile time. – Thomas Sep 15 '15 at 02:10
  • @Neil Kirk, I want to use my own method. There's no special reason other than I may copy paste the code and use it for other things besides addition. – Thomas Sep 15 '15 at 02:10
  • Ok, then I need to understand your use of the term instructions. "The least instructions" is specific to your inquiry, but does that mean the least machine operations, or something else to you? To me, for decades, the least instructions has always implied the fewest machine language steps (as would result from C or C++ code, for example)...is that the implication? – JVene Sep 15 '15 at 02:13
  • @JVene, yes I do mean machine instructions – Thomas Sep 15 '15 at 02:15
    Just using `std::accumulate` results in [identical codegen](http://goo.gl/NQhrgw) on clang and arguably [better codegen](http://goo.gl/87HeQF) on GCC for 10 elements. Moreover, with larger element counts, [both](http://goo.gl/acymj8) [compilers](http://goo.gl/ouqJyt) can vectorize `accumulate`, but not your function. – T.C. Sep 15 '15 at 02:15
  • You can use `std::accumulate` to multiply elements as well, with some clever usage. – Neil Kirk Sep 15 '15 at 02:16
  • IMO, T.C.'s comment should be the answer. At least I can't think of any other significant flaws of doing it the way OP suggests. – Chris Beck Sep 15 '15 at 02:28
  • Incidentally, if you use `template<std::size_t N> int myfunction(const int (&array)[N])`, you can deduce the array size rather than requiring the user to pass it in. –  Sep 15 '15 at 08:47
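
A minimal sketch of that last suggestion combined with `std::accumulate` might look like this (`SumArray` is a hypothetical name used only for illustration, not code from the thread):

#include <cstddef>
#include <numeric>

// The array size N is deduced from the reference-to-array parameter,
// so the caller never has to pass it explicitly.
template <std::size_t N>
int SumArray(const int (&array)[N])
{
    return std::accumulate(array, array + N, 0);
}

// Usage:
//   int data[10] = { /* filled at runtime */ };
//   int total = SumArray(data);   // N deduced as 10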

3 Answers


Don't try to outsmart the optimizer. All this complicated template machinery just makes it harder for the optimizer to understand what you actually want to do.

For example,

int f0(int *p) {
  return AddCollapseArray<0, 10>(p);
}

int f1(int *p) {
  return std::accumulate(p+0, p+10, 0);
}

Both produce exactly the same assembly with clang at -O3:

f0(int*):                                # @f0(int*)
    movl    4(%rdi), %eax
    addl    (%rdi), %eax
    addl    8(%rdi), %eax
    addl    12(%rdi), %eax
    addl    16(%rdi), %eax
    addl    20(%rdi), %eax
    addl    24(%rdi), %eax
    addl    28(%rdi), %eax
    addl    32(%rdi), %eax
    addl    36(%rdi), %eax
    retq

f1(int*):                                # @f1(int*)
    movl    4(%rdi), %eax
    addl    (%rdi), %eax
    addl    8(%rdi), %eax
    addl    12(%rdi), %eax
    addl    16(%rdi), %eax
    addl    20(%rdi), %eax
    addl    24(%rdi), %eax
    addl    28(%rdi), %eax
    addl    32(%rdi), %eax
    addl    36(%rdi), %eax
    retq

Let's say we want to do 100 elements:

int f0(int *p) {
  return AddCollapseArray<0, 100>(p);
}

int f1(int *p) {
  return std::accumulate(p+0, p+100, 0);
}

Here's what we get:

f0(int*):                                # @f0(int*)
    pushq   %rbp
    pushq   %rbx
    pushq   %rax
    movq    %rdi, %rbx
    callq   int AddCollapseArray<0u, 50u>(int*)
    movl    %eax, %ebp
    movq    %rbx, %rdi
    callq   int AddCollapseArray<50u, 50u>(int*)
    addl    %ebp, %eax
    addq    $8, %rsp
    popq    %rbx
    popq    %rbp
    retq

f1(int*):                                # @f1(int*)
    movdqu  (%rdi), %xmm0
    movdqu  16(%rdi), %xmm1
    movdqu  32(%rdi), %xmm2
    movdqu  48(%rdi), %xmm3
    paddd   %xmm0, %xmm1
    paddd   %xmm2, %xmm1
    paddd   %xmm3, %xmm1
    movdqu  64(%rdi), %xmm0
    paddd   %xmm1, %xmm0
    movdqu  80(%rdi), %xmm1
    paddd   %xmm0, %xmm1
    movdqu  96(%rdi), %xmm0
    paddd   %xmm1, %xmm0
    movdqu  112(%rdi), %xmm1
    paddd   %xmm0, %xmm1
    movdqu  128(%rdi), %xmm0
    paddd   %xmm1, %xmm0
    movdqu  144(%rdi), %xmm1
    paddd   %xmm0, %xmm1
    movdqu  160(%rdi), %xmm0
    paddd   %xmm1, %xmm0
    movdqu  176(%rdi), %xmm1
    paddd   %xmm0, %xmm1
    movdqu  192(%rdi), %xmm0
    paddd   %xmm1, %xmm0
    movdqu  208(%rdi), %xmm1
    paddd   %xmm0, %xmm1
    movdqu  224(%rdi), %xmm0
    paddd   %xmm1, %xmm0
    movdqu  240(%rdi), %xmm1
    paddd   %xmm0, %xmm1
    movdqu  256(%rdi), %xmm0
    paddd   %xmm1, %xmm0
    movdqu  272(%rdi), %xmm1
    paddd   %xmm0, %xmm1
    movdqu  288(%rdi), %xmm0
    paddd   %xmm1, %xmm0
    movdqu  304(%rdi), %xmm1
    paddd   %xmm0, %xmm1
    movdqu  320(%rdi), %xmm0
    paddd   %xmm1, %xmm0
    movdqu  336(%rdi), %xmm1
    paddd   %xmm0, %xmm1
    movdqu  352(%rdi), %xmm0
    paddd   %xmm1, %xmm0
    movdqu  368(%rdi), %xmm1
    paddd   %xmm0, %xmm1
    movdqu  384(%rdi), %xmm0
    paddd   %xmm1, %xmm0
    pshufd  $78, %xmm0, %xmm1       # xmm1 = xmm0[2,3,0,1]
    paddd   %xmm0, %xmm1
    pshufd  $229, %xmm1, %xmm0      # xmm0 = xmm1[1,1,2,3]
    paddd   %xmm1, %xmm0
    movd    %xmm0, %eax
    retq

int AddCollapseArray<0u, 50u>(int*):     # @int AddCollapseArray<0u, 50u>(int*)
    movl    4(%rdi), %eax
    addl    (%rdi), %eax
    addl    8(%rdi), %eax
    addl    12(%rdi), %eax
    addl    16(%rdi), %eax
    addl    20(%rdi), %eax
    addl    24(%rdi), %eax
    addl    28(%rdi), %eax
    addl    32(%rdi), %eax
    addl    36(%rdi), %eax
    addl    40(%rdi), %eax
    addl    44(%rdi), %eax
    addl    48(%rdi), %eax
    addl    52(%rdi), %eax
    addl    56(%rdi), %eax
    addl    60(%rdi), %eax
    addl    64(%rdi), %eax
    addl    68(%rdi), %eax
    addl    72(%rdi), %eax
    addl    76(%rdi), %eax
    addl    80(%rdi), %eax
    addl    84(%rdi), %eax
    addl    88(%rdi), %eax
    addl    92(%rdi), %eax
    addl    96(%rdi), %eax
    addl    100(%rdi), %eax
    addl    104(%rdi), %eax
    addl    108(%rdi), %eax
    addl    112(%rdi), %eax
    addl    116(%rdi), %eax
    addl    120(%rdi), %eax
    addl    124(%rdi), %eax
    addl    128(%rdi), %eax
    addl    132(%rdi), %eax
    addl    136(%rdi), %eax
    addl    140(%rdi), %eax
    addl    144(%rdi), %eax
    addl    148(%rdi), %eax
    addl    152(%rdi), %eax
    addl    156(%rdi), %eax
    addl    160(%rdi), %eax
    addl    164(%rdi), %eax
    addl    168(%rdi), %eax
    addl    172(%rdi), %eax
    addl    176(%rdi), %eax
    addl    180(%rdi), %eax
    addl    184(%rdi), %eax
    addl    188(%rdi), %eax
    addl    192(%rdi), %eax
    addl    196(%rdi), %eax
    retq

int AddCollapseArray<50u, 50u>(int*):    # @int AddCollapseArray<50u, 50u>(int*)
    movl    204(%rdi), %eax
    addl    200(%rdi), %eax
    addl    208(%rdi), %eax
    addl    212(%rdi), %eax
    addl    216(%rdi), %eax
    addl    220(%rdi), %eax
    addl    224(%rdi), %eax
    addl    228(%rdi), %eax
    addl    232(%rdi), %eax
    addl    236(%rdi), %eax
    addl    240(%rdi), %eax
    addl    244(%rdi), %eax
    addl    248(%rdi), %eax
    addl    252(%rdi), %eax
    addl    256(%rdi), %eax
    addl    260(%rdi), %eax
    addl    264(%rdi), %eax
    addl    268(%rdi), %eax
    addl    272(%rdi), %eax
    addl    276(%rdi), %eax
    addl    280(%rdi), %eax
    addl    284(%rdi), %eax
    addl    288(%rdi), %eax
    addl    292(%rdi), %eax
    addl    296(%rdi), %eax
    addl    300(%rdi), %eax
    addl    304(%rdi), %eax
    addl    308(%rdi), %eax
    addl    312(%rdi), %eax
    addl    316(%rdi), %eax
    addl    320(%rdi), %eax
    addl    324(%rdi), %eax
    addl    328(%rdi), %eax
    addl    332(%rdi), %eax
    addl    336(%rdi), %eax
    addl    340(%rdi), %eax
    addl    344(%rdi), %eax
    addl    348(%rdi), %eax
    addl    352(%rdi), %eax
    addl    356(%rdi), %eax
    addl    360(%rdi), %eax
    addl    364(%rdi), %eax
    addl    368(%rdi), %eax
    addl    372(%rdi), %eax
    addl    376(%rdi), %eax
    addl    380(%rdi), %eax
    addl    384(%rdi), %eax
    addl    388(%rdi), %eax
    addl    392(%rdi), %eax
    addl    396(%rdi), %eax
    retq

Not only is your function not fully inlined, it's also not vectorized. GCC produces similar results.

T.C.

The important qualifier here is the meaning of "least number of instructions". If that is to be interpreted as causing the CPU to perform the fewest steps, and we further stipulate that no advanced techniques are to be employed, like SIMD, GPU programming or OpenMP (or other auto-parallel technologies), just plain C or C++, then consider:

Assuming something like:

int a[ 10 ];

Which is filled with data at runtime, and will always contain 10 entries (indices 0 through 9).

std::accumulate does a nice job here, creating a tight loop in the assembler, no mess, just quick:

int r = std::accumulate( &a[ 0 ], &a[ 9 ], 0 );

Of course, some const int signifying the size of the array 'a' would be in order.

This compares curiously to:

for( int n=0; n < 10; ++n ) r += a[ n ];

The compiler very smartly emits 10 add instructions unrolled - it doesn't even bother with a loop.

Now, this means that in std::accumulate, though the loop is tight, there will be, at the minimum, two add instructions for each element (one for the sum, and one to increment the iterator). Add to that the comparison instruction and a conditional jump, and there are at least 4 instructions per item, or about 40 machine language steps of various cost in ticks.

On the other hand, the unrolled result of the for loop is just 10 machine steps, which the CPU can very likely schedule with great cache friendliness, and no jumps.

The for loop is definitely faster.

The compiler "knows" what you're trying to do, and gets the job done as well as you could by thinking it through with the proposed code you posted.

Further, if the size of the array gets too outlandish for unrolling the loop, the compiler automatically performs the classic optimization that std::accumulate does not appear to get for some reason: performing two additions per loop iteration (when it constructs a loop because of the number of elements).
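
Roughly, that classic two-accumulator transformation looks like this when written out by hand (purely an illustrative sketch of the shape, not anything the compiler emits as source; the VC assembler for the pointer loop below shows the same pattern):

// Two partial sums per iteration; the odd leftover element is handled
// after the loop. Hypothetical helper written only to illustrate the idea.
int sum_two_way(const int *p, const int *end)
{
    int s0 = 0, s1 = 0;
    while (p + 1 < end)       // at least two elements remain
    {
        s0 += p[0];
        s1 += p[1];
        p += 2;
    }
    if (p < end)              // one element left over
        s0 += *p;
    return s0 + s1;
}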

Using VC 2012, this source:

 int r = std::accumulate( &a[ 0 ], &a[ 9 ], 0 );

 int z = 0;

 int *ap = a;
 int *ae = &a[9];
 while( ap <= ae ) { z += *ap; ++ap; }

 int z2 = 0;

 for (int n=0; n < 10; ++n ) z2 += a[ n ];

This produces the following assembler snippets on a release build in VC 2012:

int r = std::accumulate( &a[ 0 ], &a[ 9 ], 0 );
00301270 33 D2                xor         edx,edx  
00301272 B8 D4 40 30 00       mov         eax,3040D4h  
00301277 EB 07                jmp         wmain+10h (0301280h)  
00301279 8D A4 24 00 00 00 00 lea         esp,[esp]  
00301280 03 10                add         edx,dword ptr [eax]  
00301282 83 C0 04             add         eax,4  
00301285 3D F8 40 30 00       cmp         eax,3040F8h  
0030128A 75 F4                jne         wmain+10h (0301280h) 

while( ap <= ae ) { z += *ap; ++ap; }
003012A0 03 08                add         ecx,dword ptr [eax]  
003012A2 03 70 04             add         esi,dword ptr [eax+4]  
003012A5 83 C0 08             add         eax,8  
003012A8 3D F4 40 30 00       cmp         eax,3040F4h  
003012AD 7E F1                jle         wmain+30h (03012A0h)  
003012AF 3D F8 40 30 00       cmp         eax,3040F8h  
003012B4 77 02                ja          wmain+48h (03012B8h)  
003012B6 8B 38                mov         edi,dword ptr [eax]  
003012B8 8D 04 0E             lea         eax,[esi+ecx]  
003012BB 03 F8                add         edi,eax  


for (int n=0; n < 10; ++n ) z2 += a[ n ];
003012BD A1 D4 40 30 00       mov         eax,dword ptr ds:[003040D4h]  
003012C2 03 05 F8 40 30 00    add         eax,dword ptr ds:[3040F8h]  
003012C8 03 05 D8 40 30 00    add         eax,dword ptr ds:[3040D8h]  
003012CE 03 05 DC 40 30 00    add         eax,dword ptr ds:[3040DCh]  
003012D4 03 05 E0 40 30 00    add         eax,dword ptr ds:[3040E0h]  
003012DA 03 05 E4 40 30 00    add         eax,dword ptr ds:[3040E4h]  
003012E0 03 05 E8 40 30 00    add         eax,dword ptr ds:[3040E8h]  
003012E6 03 05 EC 40 30 00    add         eax,dword ptr ds:[3040ECh]  
003012EC 03 05 F0 40 30 00    add         eax,dword ptr ds:[3040F0h]  
003012F2 03 05 F4 40 30 00    add         eax,dword ptr ds:[3040F4h]  

Based on comments I decided to try this in Xcode 7, with drastically different results. This is its unrolling of the for loop:

    .loc    1 58 36                 ## /Users/jv/testclang/testcp/checkloop/checkloop/main.cpp:58:36
    movq    _a(%rip), %rax
Ltmp22:
    ##DEBUG_VALUE: do3:z2 <- EAX
    movq    %rax, %rcx
    shrq    $32, %rcx
    .loc    1 58 33 is_stmt 0       ## /Users/jv/testclang/testcp/checkloop/checkloop/main.cpp:58:33
    addl    %eax, %ecx
    .loc    1 58 36                 ## /Users/jv/testclang/testcp/checkloop/checkloop/main.cpp:58:36
    movq    _a+8(%rip), %rax
Ltmp23:
    .loc    1 58 33                 ## /Users/jv/testclang/testcp/checkloop/checkloop/main.cpp:58:33
    movl    %eax, %edx
    addl    %ecx, %edx
    shrq    $32, %rax
    addl    %edx, %eax
    .loc    1 58 36                 ## /Users/jv/testclang/testcp/checkloop/checkloop/main.cpp:58:36
    movq    _a+16(%rip), %rcx
    .loc    1 58 33                 ## /Users/jv/testclang/testcp/checkloop/checkloop/main.cpp:58:33
    movl    %ecx, %edx
    addl    %eax, %edx
    shrq    $32, %rcx
    addl    %edx, %ecx
    .loc    1 58 36                 ## /Users/jv/testclang/testcp/checkloop/checkloop/main.cpp:58:36
    movq    _a+24(%rip), %rax
    .loc    1 58 33                 ## /Users/jv/testclang/testcp/checkloop/checkloop/main.cpp:58:33
    movl    %eax, %edx
    addl    %ecx, %edx
    shrq    $32, %rax
    addl    %edx, %eax
    .loc    1 58 36                 ## /Users/jv/testclang/testcp/checkloop/checkloop/main.cpp:58:36
    movq    _a+32(%rip), %rcx
    .loc    1 58 33                 ## /Users/jv/testclang/testcp/checkloop/checkloop/main.cpp:58:33
    movl    %ecx, %edx
    addl    %eax, %edx
    shrq    $32, %rcx
    addl    %edx, %ecx

This may not look as clean as VC's simple list, but it may run just as fast, because the setup (movq or movl) for each addition can run in parallel in the CPU while the previous entry is finishing its addition, costing little to nothing compared to VC's simple, clean-looking series of adds on memory sources.

The following is Xcode's std::accumulate. It seems there's an init required, but then it performs a clean series of additions, having unrolled the loop, which VC did not do.

    .file   37 "/Applications/Xcode7.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1" "numeric"
    .loc    37 75 27 is_stmt 1      ## /Applications/Xcode7.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1/numeric:75:27
    movq    _a(%rip), %r14
Ltmp11:
    movq    %r14, -48(%rbp)         ## 8-byte Spill
Ltmp12:
    shrq    $32, %r14
    movq    _a+8(%rip), %rbx
    movq    %rbx, -56(%rbp)         ## 8-byte Spill
    shrq    $32, %rbx
    movq    _a+16(%rip), %r13
    movq    %r13, -72(%rbp)         ## 8-byte Spill
    shrq    $32, %r13
    movq    _a+24(%rip), %r15
    movq    %r15, %r12
    shrq    $32, %r12
Ltmp13:
    movl    _a+32(%rip), %eax
Ltmp14:
    movq    -48(%rbp), %rax         ## 8-byte Reload
    addl    %eax, %r14d
    movq    -56(%rbp), %rax         ## 8-byte Reload
    addl    %eax, %r14d
    addl    %ebx, %r14d
    movq    -72(%rbp), %rax         ## 8-byte Reload
    addl    %eax, %r14d
    addl    %r13d, %r14d
    addl    %r15d, %r14d
    addl    %r12d, %r14d
    addl    -64(%rbp), %r14d        ## 4-byte Folded Reload

The bottom line here is that the optimizations we rely upon from compilers differ so widely and wildly from one compiler to another that we should rely upon them, but watch the output.

LLVM is quite exemplary, and understands std::accumulate better than VC, it seems - but this short investigation can't reveal whether that is a difference in the implementation of the library or of the compiler. There could be important differences in the implementation of Xcode's std::accumulate which give the compiler more insight than VC's version of the library.

That applies more generally to algorithms, even those from <numeric>. std::accumulate is essentially a for loop, likely expanded inline as a for loop based on pointers into the array. That is why VC's choice to create a loop for std::accumulate was echoed in its choice to produce a loop for the code using an int * to walk through the array, while it unrolled the loop for the for loop that references entries in the array by an integer index. In other words, it did no better in a straight for loop when pointers were used, which suggests it's VC's optimizer, not the library, at issue in this case.

This follows Stroustrup's own favorite example of the idea of information available to the compiler, comparing qsort from C and sort from C++. qsort takes a function pointer to perform the comparison, cutting the compiler off from understanding the comparison and forcing it to call a function via a pointer. The C++ sort function, on the other hand, takes a functor, which conveys more information about the comparison. That could still result in a function call, but the optimizer has the opportunity to understand the comparison sufficiently to inline it.
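
For reference, the comparison Stroustrup draws looks roughly like this (a minimal sketch of the two call styles, not his exact example; the function names are made up for illustration):

#include <algorithm>
#include <cstddef>
#include <cstdlib>
#include <functional>

// C style: the comparison hides behind a function pointer, so qsort must
// call through the pointer for every comparison.
int compare_ints(const void *a, const void *b)
{
    const int x = *static_cast<const int *>(a);
    const int y = *static_cast<const int *>(b);
    return (x > y) - (x < y);   // avoids the overflow that x - y could cause
}

void sort_c(int *a, std::size_t n)
{
    std::qsort(a, n, sizeof(int), compare_ints);
}

// C++ style: the comparison is part of the type information, so the
// compiler can see it and typically inlines it into the instantiated sort.
void sort_cpp(int *a, std::size_t n)
{
    std::sort(a, a + n, std::less<int>());
}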

In VC's case, for whatever reason (we'd have to ask Microsoft), the compiler is confused when looping through the array via pointers. The information given to it is different from that of the loop using an integer to index the array. It understands the latter, but not the pointers. LLVM, by contrast, understood both (and more). The difference in information is not important to LLVM, but it is to VC. Since std::accumulate is really an inlined for loop, and that loop is processed via pointers, it escaped VC's recognition, just as the straight for loop based on pointers did. If a specialization could be made for integer arrays, such that accumulate looped with indices rather than pointers, VC would respond with better output, but it shouldn't have to be so.
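
To make that last point concrete, such a hypothetical index-based form would be the shape VC handled well (this is only an illustration, not real library code):

// Hypothetical: the same accumulation expressed with an integer index,
// which VC 2012 unrolled, rather than with a pointer, which it did not.
int accumulate_by_index(const int a[], int n)
{
    int sum = 0;
    for (int i = 0; i < n; ++i)
        sum += a[i];
    return sum;
}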

A poor optimizer can miss the point, and a poor implementation of the library could confuse the optimizer, which means that under the best circumstances std::accumulate can perform about as well as the for loop for a simple array of integers, producing an unrolled version of the loop creating the sum, but not always. However, there's little to get in the way of the compiler's understanding in a for loop; everything is right there, the implementation of the library can't mess it up, and it's all up to the compiler at that point. There, VC shows its weakness.

I tried all settings on VC to try to get it to unroll std::accumulate, but so far it never did (haven't tried newer versions of VC).

It didn't take much to get Xcode to unroll the loop; LLVM seems to have deeper engineering. It may have a better implementation of the library, too.

Incidentally, the code example I posted at the top was used in VC, which didn't recognize that the three different summations were related. LLVM in Xcode did, which meant that the first time I tried it there it simply adopted the answer from std::accumulate and did nothing otherwise. VC was really weak on that point. In order to get Xcode to perform 3 separate tests, I randomized the array before each call; otherwise Xcode realized what I was doing where VC did not.

JVene
  • I feel dumb for thinking I needed complicated recursion instead of an unrolled for loop – Thomas Sep 15 '15 at 02:37
  • Please don't, I think it may be the reason for the assignment. We used to have stupid compilers...back when I was young, machines with 4 Mbytes of RAM were huge (that's no misprint, megabytes). They could not optimize as they do today...we've learned to trust the compiler to the extent of not bothering with assembler or trying to outthink it. More to the point, conveniences like std::accumulate are good, fast as they can be, but one thing that really matters is how much information the compiler gathers from the context of our code. If that context is lost, optimization is lost. – JVene Sep 15 '15 at 02:39
  • Why do you think the compiler can't see through the iterators in with `std::accumulate`? – T.C. Sep 15 '15 at 02:44
  • That may be implementation specific, but the basic gist is this: it uses iterators. That compares to using an int * to loop through the array. At that point, the compiler loses context, and doesn't realize what you intend. It goes with accumulate's use of the pointers, and creates a loop instead. Same thing happens writing an int * and looping in a while or for to the end of the array. – JVene Sep 15 '15 at 02:48
    I have no idea what kind of ancient compiler you are using. No GCC since 4.4 (the oldest version available on godbolt) emits a loop for `int f(int* p) { return std::accumulate(p, p+10, 0); }` – T.C. Sep 15 '15 at 02:56
  • It would be nice to see evidence of your claim that the for loop produced is unrolled by the compiler and not when using std::accumulate. I've done some testing with g++ 5.1 and don't see a difference. – sashang Sep 15 '15 at 03:30
  • Not sure how to post evidence in comments, but I'm using VC 2012 for this check. Simply ran using the debugger on a release build and observed the assembler. Should I open an answer? Oh, I'll edit the answer above – JVene Sep 15 '15 at 03:42
  • You know, I misstated earlier....it's 9 adds with an initializing mov...similar, but... – JVene Sep 15 '15 at 03:51
  • I tried this in Xcode 7 and get very different outputs. Added some to the answer on that observation – JVene Sep 15 '15 at 04:29
  • `std::accumulate(&array[0], &array[9])` doesn't add in the last element of the array. `std::accumulate(begin(array), end(array))` will, though, and doesn't require –  Sep 15 '15 at 08:43
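
For completeness, the form that last comment recommends, written out as a small illustration (`sum_all` is a made-up name, not part of the original thread):

#include <iterator>
#include <numeric>

int sum_all(const int (&array)[10])
{
    // std::begin/std::end cover the whole array, so the last element is
    // included, unlike accumulate(&array[0], &array[9], 0).
    return std::accumulate(std::begin(array), std::end(array), 0);
}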

Although std::accumulate should be enough, to unroll the loop manually you may do:

#include <cstddef>   // std::size_t
#include <utility>   // std::index_sequence, std::make_index_sequence

namespace detail
{
    template<std::size_t startIndex, std::size_t... Is>
    int Accumulate(std::index_sequence<Is...>, const int a[])
    {
        int res = 0;
        const int dummy[] = {0, ((res += a[startIndex + Is]), 0)...};
        static_cast<void>(dummy); // Remove warning for unused variable
        return res;
    }
}

template<std::size_t startIndex, std::size_t count>
int AddCollapseArray(const int a[])
{
    return detail::Accumulate<startIndex>(std::make_index_sequence<count>{}, a);
}

or, in C++17, with a fold expression:

namespace detail
{
    template<std::size_t startIndex, std::size_t... Is>
    int Accumulate(std::index_sequence<Is...>, const int a[])
    {
        return (a[startIndex + Is] + ...);
    }
}
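
Either version can then be used like this (a small usage sketch, assuming the definitions above):

#include <cstdio>

int main()
{
    int data[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    // Sums elements 0..9 of data; the unrolling happens at compile time.
    std::printf("%d\n", AddCollapseArray<0, 10>(data));   // prints 55
}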
Jarod42