
It occurs to me that I could use implicit conversions to both announce and enforce preconditions. Consider this:

import scala.language.implicitConversions

object NonNegativeDouble {
  implicit def double2nnd(d: Double): NonNegativeDouble = new NonNegativeDouble(d)
  implicit def nnd2double(d: NonNegativeDouble): Double = d.v
  def sqrt(n: NonNegativeDouble): NonNegativeDouble = scala.math.sqrt(n)
}

class NonNegativeDouble(val v: Double) {
  if (v < 0) {
    throw new IllegalArgumentException("negative value")
  }
}

object Test {
  def t1 = {
    val d: Double = NonNegativeDouble.sqrt(3.0)
    printf("%f\n", d)
    val n: Double = NonNegativeDouble.sqrt(-3.0) // throws IllegalArgumentException
  }
}

Ignore for the moment the vacuity of the example: my point is that the wrapper class NonNegativeDouble expresses the notion that a function accepts only a subset of the full range of the underlying type's values.

First, is this:

  1. A good idea,
  2. a bad idea, or
  3. an obvious idea everybody else already knows about

Second, this would be most useful with basic types, like Int and String. Those classes are final, of course, so is there a good way to not only use the restricted type in functions (that's what the second implicit is for) but also delegate to all methods on the underlying value (short of hand-implementing every delegation)?
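One answer to the delegation part of the question is that the second implicit already does most of the work: a widening conversion defined in the wrapper's companion object is in implicit scope everywhere, so every Double method becomes available on the wrapper without hand-written forwarders. A minimal sketch (the names here are mine, not from the question above):

```scala
import scala.language.implicitConversions

// Hedged sketch: a single widening implicit in the companion object
// makes the full Double API available on the wrapper.
class NonNegativeDouble(val v: Double) {
  require(v >= 0, "negative value")
}

object NonNegativeDouble {
  // Lives in the companion, so it is found automatically via implicit
  // scope whenever a NonNegativeDouble appears where a Double is needed.
  implicit def nnd2double(d: NonNegativeDouble): Double = d.v
}

object DelegationDemo {
  def main(args: Array[String]): Unit = {
    val n = new NonNegativeDouble(9.0)
    println(n + 1.0)      // Double's + via the implicit widening
    println(math.sqrt(n)) // passed where a Double is expected
    println(n.isNaN)      // a Double method, not defined on the wrapper
  }
}
```

The trade-off is that once the value has widened to Double, the non-negativity guarantee is gone for the rest of the expression.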

Michael Lorton
  • I found a way to do this in C#. This is my prototype: https://gist.github.com/1306491 – Hodza Oct 23 '11 at 09:51

3 Answers


This is an extremely cool idea, but unfortunately its true potential can't be realized in Scala's type system. What you really want here is dependent types, which allow you to impose a proof obligation on the caller of your method to verify that the argument is in range, such that the method can't even be invoked with an invalid argument.

But without dependent types and the ability to verify specifications at compile-time, I think this has questionable value, even leaving aside performance considerations. Consider, how is it any better than using the require function to state the initial conditions required by your method, like so:

def foo(i:Int) = {
    require (i >= 0)
    i * 9 + 4
}

In both cases, a negative value will cause an exception to be thrown at runtime, either in the require function or when constructing your NonNegativeDouble. Both techniques state the contract of the method clearly, but I would argue that there is a large overhead in building all these specialized types whose only purpose is to encapsulate a particular expression to be asserted at runtime. For instance, what if you wanted to enforce a slightly different precondition; say, that i > 45? Will you build an IntGreaterThan45 type just for that method?
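To make the proliferation concrete, here is a hedged sketch of what that hypothetical type would look like (the names `IntGreaterThan45` and `foo` are invented for illustration; they appear nowhere in the question):

```scala
import scala.language.implicitConversions

// Hedged illustration: each new precondition forces yet another
// single-purpose wrapper type.
class IntGreaterThan45(val v: Int) {
  require(v > 45, s"expected a value > 45, got $v")
}

object IntGreaterThan45 {
  // In the companion of the target type, so it is in implicit scope
  // at any call site without an explicit import.
  implicit def int2gt45(i: Int): IntGreaterThan45 = new IntGreaterThan45(i)
}

object PreconditionDemo {
  // The signature documents the contract, but the check still runs at
  // runtime, inside the conversion rather than inside foo itself.
  def foo(i: IntGreaterThan45): Int = i.v * 9 + 4
}
```

Calling `PreconditionDemo.foo(50)` succeeds, while `PreconditionDemo.foo(10)` throws `IllegalArgumentException` at the call site, exactly as `require` would inside the method body, only with a whole extra type to maintain.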

The only argument I can see for building e.g. a NonNegativeFoo type is if you have many methods which consume and return positive numbers only. Even then, I think the payoff is dubious.

Incidentally, this is similar to the question How far to go with a strongly typed language?, to which I gave a similar answer.

Tom Crockett
  • Without addressing any of your *real* questions, let me answer your easy ones. First, why do this instead of `require`? I will ignore the underlying reality that I had never heard of `require` and mention (a) this way is more certain and more terse and (b) this way communicates with the user of the function, rather than assuming he read the manual, which he didn't. Second, I would *like* to be able to express range restrictions as `NumberGreaterThan[45]`, but I don't think I can. Hey, from that perspective C++ is better than Scala! We should write Bjarne Stroustrup and tell him. – Michael Lorton Jan 08 '11 at 15:02

Quite a neat idea actually, though I wouldn't use it in any performance sensitive loops.

`@specialized` could also help make the code more efficient here...

Kevin Wright

This would usually be called "unsigned int" in C. I don't think it's very useful, because you wouldn't be able to define operators properly. Consider this:

val a = UnsignedInt(5)
val b = a - 3 // now, b should be an UnsignedInt(2)
val c = b - 3 // now, c must be an Int, because it's negative!

Therefore, how would you define the minus operator? Like this maybe:

def -(i: Int): Either[UnsignedInt, Int]

That would make arithmetic with UnsignedInt practically unusable.
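A hedged sketch of that Either-returning signature (this `UnsignedInt` is a toy wrapper invented for the answer, not C's unsigned int):

```scala
// Toy refinement type: subtraction returns Left when the result stays
// non-negative, Right with a plain Int when it goes negative.
case class UnsignedInt(value: Int) {
  require(value >= 0, "negative value")

  def -(i: Int): Either[UnsignedInt, Int] =
    if (value - i >= 0) Left(UnsignedInt(value - i))
    else Right(value - i)
}
```

Every subtraction now forces a pattern match before the result can be used again, which is exactly what makes chained arithmetic impractical.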

Or you define a superclass, MaybeSignedInt, that has two subclasses, SignedInt and UnsignedInt. Then you could define subtraction in UnsignedInt like this:

def -(i: Int): MaybeSignedInt
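Fleshed out, that superclass approach might look like this (the type names follow the answer; the implementation details are my own sketch):

```scala
// Hedged sketch of the superclass approach: a sealed hierarchy where
// subtraction's static result type forgets which case you got.
sealed trait MaybeSignedInt { def value: Int }

// Negative results land here.
case class SignedInt(value: Int) extends MaybeSignedInt {
  require(value < 0, "expected a negative value")
}

case class UnsignedInt(value: Int) extends MaybeSignedInt {
  require(value >= 0, "negative value")

  def -(i: Int): MaybeSignedInt =
    if (value - i >= 0) UnsignedInt(value - i)
    else SignedInt(value - i)
}
```

Even when `UnsignedInt(5) - 3` produces `UnsignedInt(2)`, the static type is `MaybeSignedInt`, so you still cannot subtract again without a pattern match.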

Seems totally awful, doesn't it? Actually, the sign of the number should not conceptually be a property of the number's type, but of its value.

Madoc
  • The C conception of `unsigned int` is *not* a restricted range of `int`. 2,147,483,649, for example, is an unsigned int (on 32-bit machines), but it is not an int. Arithmetic operations are not closed over subsets, so in my system, minus even on two NonNegativeInts returns an Int. Indeed, the only way they can be treated as closed on regular numeric types in any computer language is by expanding the type (to include things like `NaN`) *and* accepting what can only be described as wrong answers (like overflow, underflow, and loss of precision). – Michael Lorton Jan 08 '11 at 15:11
  • You're paying a high price for enforcing a contract then. I wouldn't generally recommend this, unless you have some further benefits from it. – Madoc Jan 10 '11 at 10:49