reduce (aka foldL in FP) is the most general iterative higher-order function in JavaScript. You can implement, for instance, map or filter in terms of reduce. I've used an imperative loop to better illustrate the algorithm:

const foldL = f => acc => xs => {
  for (let i = 0; i < xs.length; i++) {
    acc = f(acc)(xs[i]);
  }

  return acc;
};

const map = f => xs => {
  return foldL(acc => x => acc.concat([f(x)]))([])(xs);
}

let xs = [1, 2, 3, 4];
const inc = x => ++x;

const result = map(inc)(xs);

console.log(result);  // [2, 3, 4, 5]
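
For completeness, filter can be derived from the same foldL; here is a minimal sketch (even is just an example predicate):

const filter = pred => xs => {
  return foldL(acc => x => pred(x) ? acc.concat([x]) : acc)([])(xs);
};

const even = x => x % 2 === 0;

console.log(filter(even)(xs)); // [2, 4]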

But you can't derive some or every from reduce, because both are able to return early.
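
To illustrate: a some built on top of foldL gives the right result, but the fold still visits every element, whereas the native Array.prototype.some stops at the first match (a sketch for illustration only):

const some = pred => xs => foldL(acc => x => acc || pred(x))(false)(xs);

some(x => x > 1)([1, 2, 3, 4]); // true, but all four elements are iterated over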

So what would an even more generalized, partial reduce function look like? So far I've come up with the following naive implementation:

const foldLP = f => pred => acc => xs => {
  for (let i = 0, r; i < xs.length; i++) {
    r = pred(i, acc, xs[i]);

    if (r === true) { // normal iteration
      acc = f(acc)(xs[i]);
    } else if (r === false) { // early exit
      break;
    } /* else { // skip iteration
      continue;
    } */
  }

  return acc;
};

const takeN = n => (idx, acc, x) => idx < n;
const append = xs => ys => xs.concat(ys);

let xs = [1,2,3,4,5];
const result = foldLP(append)(takeN(3))([])(xs);

console.log(result); // [1,2,3]

I can also implement map in terms of foldLP:

const always = x => y => x;
const map = f => xs => {
  return foldLP(acc => x => acc.concat([f(x)]))(always(true))([])(xs);
}

map(inc)(xs); // [2,3,4,5,6]

The drawback is obvious: whenever an early-exit mechanism isn't needed, always is invoked unnecessarily. The transforming and the early-exiting function are composed statically by foldLP and can't be used independently. Is there a more efficient combinator that enables a generalized Array.prototype.reduce?

If you look at the call stack, the return statement of the reducing function acc => x => acc.concat([f(x)]) would have to skip several stack frames. This kind of stack manipulation makes me think of continuations. Maybe there is an efficient way to solve this problem in continuation-passing style (CPS), along with an adapted call/cc function, or at least with a generator.
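
For the generator idea, a rough sketch of what I have in mind (not a finished combinator): the fold yields every intermediate accumulator, and the consumer decides when to stop, so the early exit stays outside the fold itself.

function* foldLGen(f, acc, xs) {
  for (const x of xs) {
    acc = f(acc)(x);
    yield acc; // hand each intermediate accumulator to the consumer
  }
}

const takeN = n => xs => {
  let result = [];

  for (const acc of foldLGen(a => x => a.concat([x]), [], xs)) {
    result = acc;
    if (result.length === n) break; // early exit without touching the fold
  }

  return result;
};

takeN(3)([1, 2, 3, 4, 5]); // [1, 2, 3]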


2 Answers


It turns out that the generalization of reduce can be achieved quite easily once you have become accustomed to CPS:

const foldL = f => acc => xs => xs.length
  ? f(acc)(xs[0])(acc_ => foldL(f)(acc_)(xs.slice(1))) // the continuation receives the updated accumulator
  : acc;

const map = f => foldL(acc => x => cont => cont(acc.concat([f(x)])))([]);
const filter = pred => foldL(acc => x => cont => cont(pred(x) ? acc.concat([x]) : acc))([]);
const every = pred => foldL(acc => x => cont => pred(x) ? cont(true) : false)(true);
const some = pred => foldL(acc => x => cont => pred(x) ? true : cont(false))(false);
const takeN = n => foldL(acc => x => cont => acc.length < n ? cont(acc.concat([x])) : acc)([]);

const comp = f => g => x => f(g(x));
const not = f => x => !f(x);
const inc = x => ++x;
const odd = x => x & 1;
const even = not(odd);
const lt3 = x => x < 3;
const eq3 = x => x === 3;
const sqr = x => x * x;
const xs = [1, 2, 3, 4, 5];

map(inc)(xs); // [2, 3, 4, 5, 6]
filter(even)(xs); // [2, 4]
every(lt3)(xs); // false
some(lt3)(xs); // true
takeN(3)(xs); // [1, 2, 3]

// we can compose transforming functions as usual
map(comp(inc)(sqr))(xs); // [2, 5, 10, 17, 26]

// and the reducing functions as well
comp(map(inc))(filter(even))(xs); // [3, 5]
comp(takeN(2))(filter(odd))(xs); // [1, 3]

As you can see, this isn't really pure CPS but CPS mixed with direct style. This has the great benefit that foldL and the usual transforming functions don't have to carry around an additional continuation argument but keep their normal signatures.

I use CPS functions only in the parts of the code where they are indispensable for achieving the desired behavior. CPS is an extremely powerful construct, and you should always use the least expressive construct that does the job.

comp(takeN(2))(filter(odd))(xs) illustrates one of the weaknesses of this implementation (there are probably others). The composition of the reducing functions does not take place at the level of the individual array elements. Thus an intermediate array ([1, 3, 5]) is required before the final result ([1, 3]) can be computed. But that's the business of transducers...
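
A rough sketch of that idea, using the native Array.prototype.reduce and hypothetical mapT/filterT helpers (not part of the code above): the transformations compose on the reducing function itself, so every element passes through the whole pipeline exactly once and no intermediate array is built.

const mapT = f => reducer => (acc, x) => reducer(acc, f(x));
const filterT = pred => reducer => (acc, x) => pred(x) ? reducer(acc, x) : acc;
const concatR = (acc, x) => acc.concat([x]);

// filterT(odd) and mapT(inc) fused into a single reducer:
xs.reduce(comp(filterT(odd))(mapT(inc))(concatR), []); // [2, 4, 6]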


Lazy evaluation solves this trivially. While we don't have that in JavaScript, we can emulate it by passing a function instead of a value:

const foldR = f => acc => xs => xs.length
  ? f(xs[0])(() => foldR(f)(acc)(xs.slice(1)))
  : acc //   ^^^^^ "lazy"

const map = f => foldR(x => acc => [f(x)].concat(acc()))([])
const every = f => foldR(x => acc => f(x) && acc())(true)
//                                        ^^^^^^^^ short-circuited - f(x) ? acc() : false

let xs = [1, 2, 3, 4];
console.log(map(x => x+1)(xs)); // [2, 3, 4, 5]
console.log(every(x => x%2==0)(xs)); // false
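
Along the same lines, a some could use || for the short-circuit:

const some = f => foldR(x => acc => f(x) || acc())(false)
//                                        ^^^^^^^^ acc() is only forced while f(x) stays falsy

console.log(some(x => x % 2 == 1)(xs)); // true - the rest of the array is never visited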

An alternative would be to use CPS, where you can easily jump to the end of the function:

const foldL = f => acc => xs => cont => xs.length
  ? f(acc)(xs[0])(res => foldL(f)(res)(xs.slice(1))(cont))
  : cont(acc);

const map = f => foldL(acc => x => cont => f(x)(res => cont(acc.concat([res]))))([]);
//const every = f => // xs => cont =>
//            foldL(acc => x => c => f(x)(res => c(acc && res)))(true) // (xs)(cont)
//                                   ^^^^ eager call
const every = f => xs => cont =>
              foldL(acc => x => c => acc ? f(x)(c) : cont(false))(true)(xs)(cont)
//                 call only when acc=true ^^^^      ^^^^^^^^^^^ jump out early otherwise
let xs = [1, 2, 3, 4];
let inc = x => cont => cont(x+1);
map(inc)(xs)(console.log.bind(console)); // [2, 3, 4, 5]

let even = x => cont => cont(x%2==0)
every(even)(xs)(console.log.bind(console)); // false
Bergi
  • `foldR` first builds recursively defined environments from left to right. This is confusing, because it seems to be rather `foldL`. The actual invocations of the closures then take place from right to left, as indicated by `foldR`. The schematic representation of the evaluation of `foldR` is as follows: `f(1)(f(2)(f(3)(f(4)(acc))))`. If the closures were nested from right to left, one would obtain a reversed array. –  Feb 07 '16 at 12:00
  • Yes, in functional programming `foldr` and `foldl` differ by associativity, not by commutativity. JavaScript's `reduceRight` got it "all wrong" :-) – Bergi Feb 07 '16 at 12:21
  • Maybe I'm wrong, but `foldR` along with early exiting and reducing functions which aren't of type `[a] -> Bool` but e.g. `[a] -> [b]` causes problems, because early exiting always returns false and this value is then also reduced. This is particularly a problem with reducing functions of type `[a] -> [Bool]`, since regular bools can't be distinguished from those of potential early exits. –  Feb 08 '16 at 19:59
  • I don't understand what you mean by "*early exiting always returns false and this value is then also reduced*"? Also I'm not sure how a function with your example type `[a] -> [Bool]` would look like. – Bergi Feb 08 '16 at 20:54
  • Consider this poorly implemented yet exemplary function: `takeN = n => foldR(x => acc => [x].concat(--n && acc()))([])`. When `takeN` is short circuited (and thus the iteration of `foldR` "exits early"), it always returns zero (so it's not always bool as I've claimed) and zero is then reduced mistakenly `takeN(2)(xs) // [1,2,0]`. This applies to all take*/drop*-functions. Maybe there are no use cases for those functions in this context or it is complete nonsense anyway. So feel free to ignore my comment and I'll figure it out myself! –  Feb 08 '16 at 21:29
  • "Short-circuiting" refers exclusively to the `||`/`&&` operators for booleans. You can't use those for arrays of course. The reducer for `takeN` should be `x => acc => n-- ? [x].concat(acc()) : []` (apart from the fact that `takeN` [can't easily be implemented with `foldR`](http://stackoverflow.com/q/15879940/1048572) purely) – Bergi Feb 08 '16 at 21:58
  • This happens when someone is completely fixated on his new toy (short circuiting) :D Thanks! –  Feb 08 '16 at 22:35
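
A runnable version of the takeN reducer suggested in the last comments, wrapped so that the counter is fresh for every array (just a sketch on top of the lazy foldR above):

const takeN = n => xs => {
  let k = n; // local counter so that repeated calls don't share state

  return foldR(x => acc => k-- > 0 ? [x].concat(acc()) : [])([])(xs);
};

takeN(2)([1, 2, 3, 4, 5]); // [1, 2] - the thunk for the tail is never forced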