
In Eiffel one is allowed to use an expanded class, which doesn't allocate from the heap. From a developer's perspective one rarely has to think about conversion from Int to Float, as it is automatic. My question is this: why did Haskell not choose a similar approach to modelling Num? Specifically, let's consider the Int instance. Here is the rationale for my question:

       [1..3] = [1,2,3]
       [1..3.5] = [1.0,2.0,3.0,4.0] -- rounds up

The second list was something I was not expecting, because there are by definition infinitely many floating-point numbers between any two integers. Of course, once we test the sequence it is clear that it steps by whole numbers and the fractional upper bound is effectively rounded up. One of the reasons these conversions are needed is to allow us to compute, for example, the mean of a set of Ints.
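(For reference, a sketch of the rule behind that behaviour, modelled on the Haskell Report's `numericEnumFromTo` for floating-point types; the primed name is only for illustration:)

    -- Keep generating values while they are at most the limit plus half the (unit) step.
    numericEnumFromTo' :: Double -> Double -> [Double]
    numericEnumFromTo' n m = takeWhile (<= m + 1/2) (iterate (+ 1) n)

    main :: IO ()
    main = print (numericEnumFromTo' 1 3.5)   -- [1.0,2.0,3.0,4.0]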

In Eiffel the number type hierarchy is a bit more programmer-friendly and the conversion happens as needed: for example, a sequence can still be a set of Ints and yet yield a floating-point mean. This has a readability advantage.

Is there a reason that something like expanded classes was not implemented in Haskell? Any references would help greatly.

@ony: on the point about parallel strategies: won't we face the same issue when using primitives? The manual does discourage using primitives, and that makes sense to me; in general, wherever we could use primitives we should probably use the abstract type instead. The issue I faced when trying to compute a mean of numbers is the missing Fractional Int instance, and why 5/3 does not promote to a floating-point value instead of my having to create a floating-point array to achieve the same result. There must be a reason why a Fractional instance for Int and Integer is not defined? Knowing it could help me understand the rationale better.
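For concreteness, here is a small illustrative snippet of the distinction I keep running into (`(/)` requires a `Fractional` type, `div` an `Integral` one, and `fromIntegral` is the explicit bridge):

    main :: IO ()
    main = do
      print (5 / 3 :: Double)         -- 1.6666666666666667
      print (5 `div` 3 :: Int)        -- 1 (integer division)
      let n = 5 :: Int
          d = 3 :: Int
      print (fromIntegral n / fromIntegral d :: Double)   -- 1.6666666666666667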

@leftaroundabout: the question is not about expanded classes per se but about the convenience such a feature can offer, although that feature alone is not sufficient to handle type promotion from an Int to a Float, for example, as mentioned in my response to @ony. Let's take the classic example of a mean and try to define it as

    mean :: [Int] -> Double
    mean xs = sum xs / length xs                    -- not valid Haskell: no Fractional Int

    mean :: [Double] -> Double
    mean xs = sum xs / fromIntegral (length xs)     -- works, but the list must already be floating point

I would have liked not to have to call fromIntegral to get the mean function to work, and that ties back to the missing Fractional Int. Although the explanation seems to make sense (it has to), what I don't understand is: if I am clear that I expect a Double, and I state it in my type signature, is that not sufficient to do the appropriate conversion?
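A version that works on an `Int` list does exist, but only with explicit conversions, which is exactly what I was hoping to avoid (a sketch):

    -- Sketch: a mean over any Integral list, returning any Fractional result.
    -- The two fromIntegral calls are the conversions the type system insists on.
    mean :: (Integral a, Fractional b) => [a] -> b
    mean xs = fromIntegral (sum xs) / fromIntegral (length xs)

    main :: IO ()
    main = print (mean [1, 2, 3, 4 :: Int] :: Double)   -- 2.5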

dganti
  • Is this really a question about expanded classes? I doubt it. Can you relate it to some more widespread programming language, e.g. C of course does automatic type conversion as well (albeit very unsafely). And I don't think you've quite grasped how Haskell's type classes work – in your example, nothing is _rounded_; try the list `[0.5 .. 4]`. – leftaroundabout Jan 17 '14 at 13:30
  • The fact that the latter sequence ends in 4.0 is a bug in the Haskell definition of `enumFromTo` for floating point numbers. It used to be correct, but was changed in a misguided attempt to make it more "user friendly". – augustss Jan 17 '14 at 13:56
  • The point @augustss is making is relevant, but [whether this is "a bug" is disputable](http://stackoverflow.com/questions/7290438/haskell-ranges-and-floats/7296160#7296160). – leftaroundabout Jan 17 '14 at 14:09
  • You can of course have a different opinion if it's a bug or not. Floating point is tricky. There is no way to paper over the weirdness that it has, because it will just pop up in a different place. So I think it's better to use the obvious definitions of functions rather than trying to be "cute". I consider it a bug that the sequence `[a,b .. c]` can include an element larger than `c`. If it contains `c` or not will depend on `a` and `b`. – augustss Jan 17 '14 at 15:04
  • @augustss - I tried [0.5 .. 4] and it gives [0.5, 1.5, 2.5, 3.5, 4.5], it doesn't end in 4.0 (GHC 7.6.3). – Alfonso Villén Jan 17 '14 at 15:12
  • I wouldn't expect it to end in 4.0. You can't get from 0.5 to 4.0 by adding 1.0. – augustss Jan 17 '14 at 15:38
  • You're not the first to not like this. See [Haskell ranges and floats](http://stackoverflow.com/questions/7290438/haskell-ranges-and-floats) – not my job Jan 17 '14 at 16:42
  • @augustss, `[0,2 .. 5]` results in `[0,2,4]`, while you also can't get from `0` to `5` by adding `2`. It actually rounds to the nearest step: `[1,1.5 .. 2.3]` will result in `[1.0,1.5,2.0,2.5]`, while `.. 2.2` will end at `2.0`. – ony Jan 17 '14 at 21:21
  • @leftaroundabout I don't see how anyone can argue that this is right `[1,3 .. 6]::[Integer] = [1,3,5]` and `[1,3 .. 6] :: [Double] == [1,3,5,7]`. – augustss Jan 17 '14 at 23:34
  • @augustss: you did read my answer to the other question, right? My point is, _nothing_ in floating-point arithmetic is ever "right" in the sense you'd use for integers, ADTs and pretty much all the rest of computer science. In particular, equality is not a useful concept ([_abstract stone duality_](http://www.paultaylor.eu/ASD/) delves into the mathematics of the issue). — OTOH, being inside some range _has_ a valid meaning for floats, so, yeah: `[1,3..6] == [1,3,5,7]` **is** horrible. `Double` shouldn't be `Enum` at all; something like `[1 ..[4].. 6] = [1,2.6̅,4.3̅,6]` would be better. – leftaroundabout Jan 18 '14 at 00:56
  • Yes, I did read your other answer. It did not convince me. :) One cannot use FP as counters (unless you know all values can be represented exactly), doing so should be punished. :) – augustss Jan 18 '14 at 09:40
  • Could not agree with you more @augustss :). – dganti Jan 19 '14 at 03:10
  • @user2976249, I'd say that having different functions `div :: Integral a => a -> a -> a` and `(/) :: Fractional a => a -> a -> a` is related to the fact that these are usually different operations: `(div a b) * b <= a`, while `(a / b) * b ≈ a`. `(/)` cannot be used with integers, since they cannot fulfil that rule. An alternative is `[1 :: Rational, 1.5 .. 29/10] = [1 % 1,3 % 2,2 % 1,5 % 2,3 % 1]`. The presence of approximation always makes comparison inexact. – ony Jan 19 '14 at 07:01

2 Answers


`[a..b]` is shorthand for `enumFromTo a b`, a method of the `Enum` typeclass. It begins at `a` and `succ`s until the first time `b` is exceeded.

`[a,b..c]` is shorthand for `enumFromThenTo a b c`, which is similar to `enumFromTo` except that instead of `succ`ing it adds the difference `b - a` each time. By default this difference is computed by round-tripping through `Int`, so fractional differences may or may not be respected. That said, `Double` works as you'd expect:

Prelude> [0.0, 0.5.. 10]
[0.0,0.5,1.0,1.5,2.0,2.5,3.0,3.5,4.0,4.5,5.0,5.5,6.0,6.5,7.0,7.5,8.0,8.5,9.0,9.5,10.0]

`[a..]` is shorthand for `enumFrom a`, which just `succ`s forever.

`[a,b..]` is shorthand for `enumFromThen a b`, which just adds `(b - a)` forever.
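To make those equivalences concrete, here is a small sketch; the results in the comments assume the standard Prelude instances:

    -- The range sugar and the Enum methods it stands for:
    --   [a..b]   ==> enumFromTo a b
    --   [a,b..c] ==> enumFromThenTo a b c
    --   [a..]    ==> enumFrom a
    --   [a,b..]  ==> enumFromThen a b
    main :: IO ()
    main = do
      print (enumFromTo 1 3.5         :: [Double])   -- [1.0,2.0,3.0,4.0]
      print (enumFromThenTo 0 0.5 10  :: [Double])   -- same as [0.0, 0.5 .. 10]
      print (take 5 (enumFrom 1       :: [Int]))     -- [1,2,3,4,5]
      print (take 5 (enumFromThen 1 3 :: [Int]))     -- [1,3,5,7,9]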

J. Abrahamson

As for the behaviour, @J.Abrahamson has already replied: that's the definition of `enumFromThenTo`.

As for the design... GHC actually has `Float#`, which is an unboxed type (it can be allocated anywhere, but its value is strict). Since Haskell is a lazy language, it assumes that most values are not required initially, until they are actually demanded by a primitive with strict arguments. Consider `length [2..10]`: in this case, without optimization, Haskell may even avoid generating the numbers and simply build up the list (without its values). A probably more useful example is `takeWhile (<100) [x*(x-1) | x <- [2..]]`.
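A tiny illustration of that laziness (just a sketch; the second expression only forces the elements it actually needs):

    main :: IO ()
    main = do
      print (length [2..10])                                  -- 9
      print (takeWhile (< 100) [x * (x - 1) | x <- [2 ..]])   -- [2,6,12,20,30,42,56,72,90]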

But you shouldn't think that there is overhead here, since you are writing in a language that abstracts all of that away with thunks (except for strictness annotations). The Haskell compiler has to take this work on itself: when the compiler can tell that all elements of a list will be referenced (reduced to normal form) and decides to process them within one chain of returns, it can allocate them on the stack.
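As a sketch of the strictness annotations mentioned above (a bang pattern keeps the accumulator evaluated at each step instead of building a chain of thunks):

    {-# LANGUAGE BangPatterns #-}

    -- Strict accumulator: the bang forces acc on every step, so no thunks pile up.
    sumStrict :: [Int] -> Int
    sumStrict = go 0
      where
        go !acc []       = acc
        go !acc (x : xs) = go (acc + x) xs

    main :: IO ()
    main = print (sumStrict [1 .. 1000000])   -- 500000500000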

Also, with such an approach you can get more out of your code by using multiple CPU cores. Imagine that, using Strategies, your list is processed on different cores; they then have to share the common data on the heap (not on the stack).
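For instance, a sketch using the `parallel` package's `Control.Parallel.Strategies` (hypothetical workload; compile with `-threaded` and run with `+RTS -N` to actually use several cores):

    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- Stand-in for some real per-element work.
    expensive :: Int -> Int
    expensive x = x * (x - 1)

    -- parMap sparks each element's evaluation so it can run on another core;
    -- the list they all share lives on the heap.
    main :: IO ()
    main = print (sum (parMap rdeepseq expensive [1 .. 100000]))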

ony