
We can see that we can use reduce (a left fold) as the function with which to define other higher-order functions such as map, filter and mapcat:

(defn mapl [f coll]
  (reduce (fn [r x] (conj r (f x)))
          [] coll))

(defn filterl [pred coll]
  (reduce (fn [r x] (if (pred x) (conj r x) r))
          [] coll))

(defn mapcatl [f coll]
  (reduce (fn [r x] (reduce conj r (f x)))
          [] coll))
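
A quick REPL check shows these behave like their built-in counterparts, except that they build vectors rather than lazy seqs:

(mapl inc [1 2 3])               ;=> [2 3 4]
(filterl odd? [1 2 3 4])         ;=> [1 3]
(mapcatl (fn [x] [x x]) [1 2])   ;=> [1 1 2 2]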

We also appear to be able to do this in terms of foldr. Here are map and filter defined in terms of foldr, from Rich Hickey's Transducers talk (at 17:25):

(defn mapr [f coll]
  (foldr (fn [x r] (cons (f x) r))
         () coll))

(defn filterr [pred coll]
  (foldr (fn [x r] (if (pred x) (cons x r) r))
         () coll))
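
Checking these as well (using the foldr defined further down), they build lists by consing onto ():

(mapr inc [1 2 3])        ;=> (2 3 4)
(filterr odd? [1 2 3 4])  ;=> (1 3)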

Now we can define map, foldl (reduce) and foldr in terms of first, rest and cons (car, cdr and cons):

(defn foldr [f z xs]
  (if (empty? xs)
    z
    (f (first xs) (foldr f z (rest xs)))))

(defn foldl [f z xs]
  (if (empty? xs)
    z
    (foldl f (f z (first xs)) (rest xs))))

(defn map [f lst]
  (if (empty? lst)
    '()
    (cons (f (first lst)) (map f (rest lst)))))
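
As a quick check of these definitions, the left fold nests its calls to the left while the right fold nests them to the right:

(foldl - 0 [1 2 3])   ;=> (- (- (- 0 1) 2) 3) => -6
(foldr - 0 [1 2 3])   ;=> (- 1 (- 2 (- 3 0))) =>  2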

My question is: why are fold and reduce considered fundamental, when surely everything is defined in terms of cons, car and cdr? Isn't this looking at things at the wrong level?

hawkeye
    [Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire](http://wwwhome.ewi.utwente.nl/~fokkinga/mmf91m.pdf) seems relevant. I also wonder whether there's more than one "fundamental" way of looking at things -- there's the way Clojure's implementation works, yes, but there's also a mathematical way of describing things. – Matt Fenwick Sep 29 '14 at 12:06
  • I really appreciate that comment, Matt - I'm just wondering if there are a couple of 'building block starting places' that we can categorise. Some say that Lisp is the Maxwell's equations of software http://www.michaelnielsen.org/ddi/lisp-as-the-maxwells-equations-of-software/ - but the theme in this Q&A has been that this is limited to 'lists', and sometimes you want to use `fold` to transform non-list structures. – hawkeye Sep 29 '14 at 22:20
  • Note that `cons`, `car`, and `cdr` aren't primitive either in an objective sense: you can define them all in terms of just `lambda` (see the sketch after these comments). So why ask to reduce everything to these three specific primitive operations, instead of some other ones? It's all relative. – amalloy Sep 29 '14 at 23:56
  • Those primitives are significant because they're the 'bare minimum' we need to build a Lisp Machine. http://stackoverflow.com/questions/3482389/how-many-primitives-does-it-take-to-build-a-lisp-machine-ten-seven-or-five (Now perhaps this is more sentiment than science - but it is the context to the question). – hawkeye Sep 30 '14 at 01:32
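
As a sketch illustrating amalloy's comment above (the names kons/kar/kdr are made up here to avoid shadowing the real operators), pairs can be built from closures alone:

(defn kons [a d] (fn [pick] (pick a d)))  ; a pair is a function awaiting a selector
(defn kar  [p] (p (fn [a _] a)))          ; select the first element
(defn kdr  [p] (p (fn [_ d] d)))          ; select the second element

(kar (kons 1 2)) ;=> 1
(kdr (kons 1 2)) ;=> 2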

3 Answers


I thought Rich Hickey explained exactly that in his talk about transducers.

He sees folds as a fundamental concept of transformation. A fold doesn't need to know what structure it is working on, or how to operate on that structure.

You just defined fold in terms of itself, first and rest (car and cdr). What Rich is arguing is that fold in itself is an abstract concept, separate from the data structure it operates on, and that as long as we provide it certain functions that actually operate on the data structure, it will work as expected.

So in the end it's all about separation of concerns and reusability.
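
As a rough sketch of that separation (not Rich Hickey's actual code; the name fold-with is made up), a fold can receive the structure-walking functions as arguments and know nothing about the collection itself:

(defn fold-with [done? head tail f init coll]
  ;; done?, head and tail are the only things that know about the structure
  (if (done? coll)
    init
    (recur done? head tail f (f init (head coll)) (tail coll))))

(fold-with empty? first rest + 0 [1 2 3 4]) ;=> 10

Handing it a different done?/head/tail triple lets the same fold walk a different concrete representation.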

soulcheck

Folds are a unifying framework for data processing in a principled way, because they correspond to inductive data definitions. They are not ad hoc. cons/car/cdr are building blocks for creating data (and code), but in an unprincipled way (we can do anything, and process it ad hoc). Folds are higher level, more disciplined, more predictable, easier to reason about.

Given the ad-hoc implementations of map, filter and mapcat for lists, we can see there is something similar in them: the structure of the code follows the structure of the data (a list, built of cons nodes, to which the two arguments of the combining function correspond). And that is fold.

In that talk, Rich Hickey abstracts away the step function, but he doesn't abstract over the data. Any combining function used to fold over labeled trees has to take three parameters, not two. So what his -ing functions (mapping, filtering, etc.) do is still, conceptually, folding over lists; it's just that they abstract over the concrete implementation of lists - whether as cons cells, hashed array trees, or whatever.
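
For instance, here is a hedged sketch (names made up) of a fold over labeled binary trees, represented as nil or [left label right]; its combining function takes three arguments, unlike the two-argument function a list fold takes:

(defn tree-fold [f z tree]
  (if (nil? tree)
    z
    (let [[l v r] tree]
      (f (tree-fold f z l) v (tree-fold f z r)))))

(tree-fold (fn [l v r] (+ l v r)) 0 [[nil 1 nil] 2 [nil 3 nil]]) ;=> 6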

Will Ness
  • Yeah - it's quite a shock to the Lispers (like me) when 'not everything is a list' - and you need a HOF that can transform non-list data structures. Really appreciate this answer. – hawkeye Sep 30 '14 at 22:58
  • you need a HOF to transform list data too. (I meant "higher level" in a *state-of-mind* kind of way...) folds are just conceptually in direct correspondence with the data type. [Church encoding](https://en.wikipedia.org/wiki/Church_encoding#Represent_the_list_using_right_fold) is a fold. BTW it's a very interesting achievement, this separation, and with such a simple, basic tool as *abstraction*. Abelson and Sussman can be proud. :) It is a well known problem, and in Haskell e.g. all data types are concrete. – Will Ness Sep 30 '14 at 23:11

Most operators in a Lispy language have a partner that you can use to define them, or vice versa. This attribute is called "metacircularity". There is no single set of basic operators that is the fundamental minimum needed to provide the full programmability of the language.

You can read more at this paper: http://home.pipeline.com/~hbaker1/MetaCircular.html

  • Fascinating claim. This question would disagree with that assumption http://stackoverflow.com/questions/3482389/how-many-primitives-does-it-take-to-build-a-lisp-machine-ten-seven-or-five The paper you linked mentioned let, catch and labels. I don't think anyone ever thought those were fundamental anyway. Perhaps you could provide additional references for your claim? – hawkeye Oct 01 '14 at 22:36
  • Well, there is communication on comp.lang.lisp from the editor of the ANSI standard discussing the debate as to which set of operators should be defined as the base operators (for the purposes of `macroexpand` and code-walkers). Not sure how to find it at this point, but you can look at the list of "special operators" in the ANSI Common Lisp standard if you want proof that a decision was made. The question you refer to does not discuss what is fundamental, but rather what was included in the original Lisp. – Rahul Jain Oct 21 '14 at 01:07