
Pure untyped lambda calculus is a powerful concept. However, building a machine or interpreter for real-world use is often described as (close to) impossible. I want to investigate this. Is it theoretically possible to build a comparatively fast untyped lambda calculus machine?

By comparatively fast I mean roughly comparable to modern Turing-style (conventional register-machine) architectures for a similar range of tasks, within a similar amount of resources (gates, operations, physical space, power use, etc.).

I place no limitations on the implementation or architectural layers of the machine, except that it must be physically realizable in some reasonably realistic way. There are no restrictions on how IO is handled either.

  • If possible, what are the main challenges?
  • If impossible, why and how?
  • What is the state of research in this area?
  • Which fields and subjects are most relevant?

How much is known about the feasibility of a computer architecture based around lambda calculus?

Questions covering similar ground:

Jostein
    +1 for an interesting question, though you might get a better answer at cstheory.stackexchange.com – Gabe Moothart May 18 '11 at 15:11
  • Hm, I didn't think of that. I'll add links to the related questions on cstheory above for now. Moderators feel free to move the question if that's possible. – Jostein May 18 '11 at 15:31

1 Answer


First, it is possible to compile the lambda calculus efficiently to machine code even on existing architectures. After all, Scheme is the lambda calculus plus a bit extra, and it can be compiled efficiently. However, Scheme & co. implement the lambda calculus under strict evaluation. It is also possible to compile the lambda calculus under non-strict evaluation efficiently! On this, see SPJ's two books for some background: http://research.microsoft.com/en-us/um/people/simonpj/papers/papers.html
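
As a toy illustration of the strict/non-strict distinction (this is only a naive interpreter sketch of mine, nothing like the compilation techniques in those books), here is a minimal untyped lambda calculus evaluator in Haskell with both call-by-value and call-by-name reduction to weak head normal form:

```haskell
-- A minimal sketch of strict vs. non-strict reduction of untyped lambda terms.
module Main where

data Term
  = Var String
  | Lam String Term
  | App Term Term
  deriving Show

-- Naive substitution; for brevity it assumes bound variable names are
-- globally unique, so no capture-avoiding renaming is done.
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

-- Call-by-name: the argument is substituted unevaluated (non-strict).
cbn :: Term -> Term
cbn (App f a) =
  case cbn f of
    Lam x b -> cbn (subst x a b)
    f'      -> App f' a
cbn t = t

-- Call-by-value: the argument is reduced before substitution (strict).
cbv :: Term -> Term
cbv (App f a) =
  case cbv f of
    Lam x b -> cbv (subst x (cbv a) b)
    f'      -> App f' a
cbv t = t

main :: IO ()
main = do
  let omega = App (Lam "w" (App (Var "w") (Var "w")))
                  (Lam "w" (App (Var "w") (Var "w")))
      k2    = Lam "x" (Lam "y" (Var "y"))
  -- (\x y. y) Omega terminates under call-by-name but would loop under
  -- call-by-value, so only the non-strict evaluator is run on it here.
  print (cbn (App k2 omega))
```

The example term reduces under call-by-name even though its argument diverges; a strict evaluator would loop on it, which is the difference the answer is pointing at.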

On the other hand, it is also true that if we built hardware designed for functional languages, we could compile code to that hardware and do very well indeed. The best new stuff on this I know of is the Reduceron: http://www.cs.york.ac.uk/fp/reduceron/

The key to the performance of the Reduceron, which is quite compelling, is that it is built around parallel graph reduction, and aims to exploit the opportunities for parallelism made explicit in the reduction of lambda calculus expressions.
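
To make the sharing idea behind graph reduction concrete, here is a tiny sequential sketch in Haskell (only a toy of my own; the Reduceron's actual machine uses template instantiation and reduces in parallel, which this does not attempt): mutable nodes are overwritten with their results, so a shared subgraph is reduced at most once.

```haskell
-- A sequential toy sketch of graph reduction with sharing via in-place update.
module Main where

import Data.IORef

data Node
  = NInt Int                       -- an evaluated value
  | NAdd (IORef Node) (IORef Node) -- a primitive application, standing in for a redex
  | NInd (IORef Node)              -- an indirection left behind by an update

-- Reduce a node to weak head normal form, updating it in place.
whnf :: IORef Node -> IO Int
whnf ref = do
  node <- readIORef ref
  case node of
    NInt n   -> return n
    NInd r   -> whnf r
    NAdd l r -> do
      n <- (+) <$> whnf l <*> whnf r
      writeIORef ref (NInt n)      -- overwrite the redex; all sharers see the result
      return n

main :: IO ()
main = do
  one    <- newIORef (NInt 1)
  two    <- newIORef (NInt 2)
  shared <- newIORef (NAdd one two)      -- this node is referenced twice below
  top    <- newIORef (NAdd shared shared)
  result <- whnf top
  print result                           -- 6; the shared addition is performed once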

sclv
  • Exciting stuff! :) Thanks a bunch! – Jostein May 18 '11 at 19:17
  • 2
    Its not *pure* lambda calculus. – cdiggins Sep 23 '11 at 00:31
  • @cdiggins -- are you saying that the techniques described in the two links provided cannot be used to compile pure lambda calculus!? Recall that the pure (typed) lambda calculus is a subset of Haskell... – sclv Sep 23 '11 at 00:38
  • I'm just pointing out that using constants instead of the church numerals is not "pure lambda calculus". If you used church numerals (or whatever encoding) then doing real math is going to be slow. – cdiggins Sep 25 '11 at 20:01
  • 3
    To the extent that church numerals are slower than machine ints, that's an issue of algorithmic complexity, not of implementation. Furthermore, one can compile the former to the latter. – sclv Sep 25 '11 at 21:47
  • Awesome, I wish I could save this answer on my SO account – MaiaVictor Jul 03 '13 at 09:54
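
To put the Church-numeral exchange above in concrete terms, here is a small Haskell sketch (purely illustrative; it says nothing about how a compiler would actually translate Church numerals to machine ints): numerals as ordinary functions, with arithmetic and a conversion to Int.

```haskell
{-# LANGUAGE RankNTypes #-}
-- Church numerals as plain Haskell functions, plus a conversion to machine Int.
module Main where

type Church = forall a. (a -> a) -> a -> a

czero :: Church
czero _ z = z

csucc :: Church -> Church
csucc n = \s z -> s (n s z)

cadd :: Church -> Church -> Church
cadd m n = \s z -> m s (n s z)

cmul :: Church -> Church -> Church
cmul m n = \s -> m (n s)

-- Count applications of the successor function to recover a machine Int.
toInt :: Church -> Int
toInt n = n (+ 1) 0

two, three :: Church
two   = csucc (csucc czero)
three = csucc two

main :: IO ()
main = do
  print (toInt (cadd two three)) -- 5
  print (toInt (cmul two three)) -- 6
```

Evaluating arithmetic on this encoding takes time proportional to the magnitude of the numbers involved, which is the algorithmic-complexity point made in the last comment.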