I'm learning the basics of concatenative languages, the original idea of which is that concatenating function names denotes function composition, rather than function application as in Haskell.

Joy, Forth, and Factor are postfix, which in practice means stack-based, but there are prefix concatenative languages as well, such as Om.

I wonder whether a Haskell variant could, in theory, be a concatenative language simply by swapping (or even equating) the precedence of composition (currently 9) with that of function application (currently 10).

If values in Haskell are just zero-argument functions, why is function application different from function composition? Is function application not the same as composing with a zero-argument function?

Would it be possible, in a simple way, to make an interpreter or precompiler that transforms concatenative syntax into Haskell syntax by defining new composition and application operators with different precedences, and by assuming that plain concatenation without parentheses means composition? I think it is just a question of syntax; am I wrong? It would avoid many of the cases where we have to use parentheses or the $ operator in Haskell. Or is it a more fundamental problem, not just one of syntax and precedence?

Hint: suppose that every function and operator in Haskell is prefix; for this exercise we can forget about infix notation and all kinds of "syntactic sugar".

enrique
  • Values in Haskell are not just constant functions; they are not functions at all, or, at best, if you like to abuse the way you talk about these things, are zero-argument functions (for which "constant" isn't a meaningful description -- constant over what, exactly?). – Daniel Wagner Dec 14 '14 at 01:59
  • See also [Concatenative, Row-polymorphic Programming in Haskell](https://github.com/leonidas/codeblog/blob/master/2012/2012-02-17-concatenative-haskell.md) and [the concatenative package](https://hackage.haskell.org/package/concatenative-1.0.1/docs/Control-Concatenative.html) which are, incidentally, Google's top two hits for "concatenative programming Haskell". – Daniel Wagner Dec 14 '14 at 02:01
  • @daniel-wagner, you are right, zero-argument functions are not constant functions, sorry. So function application can be seen as function composition when the second function has no arguments. This interpretation avoids "values" and function application: everything is functions and composition. This is the concatenative way, and I'm trying to see if Haskell can be used in a concatenative way. Your first reference is a very good attempt to answer that question and implement a concatenative syntax in Haskell. Thank you. – enrique Dec 14 '14 at 14:36
  • Not all values are functions. One need not have a function to have recursion. – Daniel Wagner Dec 20 '14 at 11:33

2 Answers

> If values in Haskell are just constant functions, why is function application different from function composition? Is function application not the same as composing with a constant function?

Values in Haskell are not “constant functions”. Nor are they “nullary functions”. The only things in Haskell that are functions are those whose type includes the function-arrow constructor ->.

A constant function is one which, given any input, returns the same output.

alwaysOne x = 1

map alwaysOne [1..5] == [1, 1, 1, 1, 1]

The derivative of such a function is 0. The const function constructs such functions conveniently, by ignoring its second argument and always returning the first.

map (const 1) [1..5] == [1, 1, 1, 1, 1]

The concept of a “nullary function” only makes sense in a language in which functions may take multiple arguments—in Haskell, definitions of functions with multiple arguments are syntactic sugar for definitions of chained functions with one argument, in a process known as currying. All these definitions are equivalent.

foo x y = x + y

foo x = \y -> x + y

foo = \x -> \y -> x + y
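Partial application falls out of currying for free; a small check (with `foo` as defined above and `addFive` as an assumed illustrative name):

```haskell
foo :: Int -> Int -> Int
foo x y = x + y

-- Applying foo to one argument yields an ordinary one-argument function.
addFive :: Int -> Int
addFive = foo 5

main :: IO ()
main = print (addFive 3)  -- prints 8
```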

(In practice, for efficiency reasons, GHC’s runtime deals in multi-argument functions and only constructs closure objects for partially applied functions.)

> I think that it is just a question of syntax, am I wrong?

The basic idea of concatenative languages is that programs denote functions, and concatenating two programs gives you a program that denotes the composition of those functions. So if f is a function, and g is a function, then f g is a program that denotes what we would write in Haskell as g . f. In abstract algebra terms, there is a homomorphism from the syntactic monoid onto the semantic monoid. That is the issue of syntax.
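This left-to-right reading can be made concrete in Haskell today: the `>>>` operator from `Control.Category` already composes in concatenative order. A minimal sketch, with `double` and `increment` as assumed example names:

```haskell
import Control.Category ((>>>))

double :: Int -> Int
double x = x * 2

increment :: Int -> Int
increment x = x + 1

-- Concatenative reading: apply double first, then increment.
-- double >>> increment  is the same function as  increment . double
program :: Int -> Int
program = double >>> increment

main :: IO ()
main = print (program 5)  -- prints 11
```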

However, there is also an issue of semantics. These functions must manipulate the program state being implicitly passed between them, and the state of a real program is complex—so in practice, concatenative languages tend to use a tuple denoting a stack of values, because this is simple and efficient to implement in real hardware. In theory, the program state could be anything, such as a map or a set.
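A minimal sketch of that tuple-stack encoding (the names `push`, `dup`, and `add` are illustrative, not from any particular library):

```haskell
-- The stack is a nested pair growing to the right:
-- ((rest, x), y) has y on top.
push :: a -> s -> (s, a)
push x s = (s, x)

dup :: (s, a) -> ((s, a), a)
dup (s, x) = ((s, x), x)

add :: Num a => ((s, a), a) -> (s, a)
add ((s, x), y) = (s, x + y)

-- The postfix program "2 3 add dup add", read left to right;
-- with (.) the rightmost function runs first.
prog :: s -> (s, Int)
prog = add . dup . add . push 3 . push 2

main :: IO ()
main = print (snd (prog ()))  -- prints 10
```

Note how each word's type describes only the top of the stack it touches, leaving the rest polymorphic in `s`; this is the row polymorphism the referenced article develops further.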

I think the semantics of Haskell are the real stumbling-block here, and while you can embed a concatenative DSL into Haskell, you need a concatenative language in order to make this usable for day-to-day programming.

Jon Purdy
  • Thank you @Jon (I read about your "Kitten" :) ). You say: "These functions must manipulate the program state being **implicitly** passed between them", but I cannot deduce the implication for _state_ from this: "programs denote functions, and concatenating two programs gives you a program that denotes the composition of those functions". Is a concatenative syntax without (explicit) state not possible, and why? And is programming with monads, or arrows, close to concatenative programming syntax? (Functions take the action/state from the previous one and pass the action/state to the next one.) – enrique Jan 01 '15 at 01:16
  • @enrique: In a stack language, every function accepts a stack as input, and returns a stack as output; in practice, this stack manipulation is done in-place, so the stack denotes the state of the program. In order to run the function that the program represents, you need to give it some input, so the initial state is an empty stack. I suppose that in an impure concatenative language, “push” corresponds to `return`, and composition to monadic composition (`>=>`) in `IO`. – Jon Purdy Jan 02 '15 at 21:15

The best answer to my question is the article referenced by @Daniel Wagner in his second comment, "Concatenative, Row-polymorphic Programming in Haskell", written by Sami Hangaslammi in response to another good article by @Jon Purdy, "Why Concatenative Programming Matters".

It shows "one way to implement a concatenative DSL inside Haskell", which is exactly what I wanted to know: whether it was possible, and how.

enrique