
I come from functional languages (e.g. Haskell) and I rely heavily on typeclasses to achieve polymorphism, which is also a structured way to implement ad-hoc overloading.

However, recently I've started to understand OOP's way of modeling real problems, and I'm curious why we need dynamic polymorphism in OOP languages (such as Java). In my experience, most function calls can be resolved at compile time, as many functional languages do not support subtyping.

So my question is: in what kind of situation do we have to use dynamic polymorphism instead of compile-time polymorphism? My guesses are:

  • When we use a subtype system in which we cannot determine an object's actual type, e.g. a container holding objects of various types. (However, in this situation, why not try an algebraic data type or union type to model the container's element type? See the sketch after this list.)
  • When we only have the object and do not know the real names of its methods, so we have to use the vtable to help us.
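
For example, the kind of ADT encoding I have in mind looks like this (a hypothetical Haskell sketch; Element and its constructors are made-up names):

-- A closed sum type models the container's element type.
data Element = IntElem Int | StrElem String

-- Every call is resolved by a static pattern match, not a vtable lookup.
render :: Element -> String
render (IntElem n) = show n
render (StrElem s) = s

container :: [Element]
container = [IntElem 1, StrElem "foo"]
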
Ireina
  • Does this answer your question https://stackoverflow.com/questions/20783266/what-is-the-difference-between-dynamic-and-static-polymorphism-in-java ? – nehacharya Mar 09 '21 at 08:16
  • @salazarin I think I basically understand the differences between static and dynamic polymorphism, but I still do not understand when dynamic polymorphism is necessary. – Ireina Mar 09 '21 at 08:20
  • "Why not use ADT or union types" - maybe a language doesn't have them. – mkrieger1 Mar 09 '21 at 08:22
  • @mkrieger1 Yes I know many OOP languages do not fully support ADT or union/intersection types. So does that mean we can safely avoid dynamic dispatching if we have them? – Ireina Mar 09 '21 at 08:25
  • I think you need to start with the concepts of composition (has-a) and inheritance (is-a) relationships among types. From there, if you consider that an is-a relationship can apply to a static type (the parent) and a dynamic type (the child, as in when the object is instantiated dynamically), then you begin to see that not all dynamic behavior can be statically programmed in source. This is where dynamic polymorphism adds flexibility to objects compiled separately but following the same contract. – ernest_k Mar 09 '21 at 08:29
  • @ernest_k Thank you for your suggestions. I think you prefer OOP's way of modeling problems (such as the 'is-a' relation). However, I've programmed a lot in Haskell without this modeling approach... So does this mean that dynamic polymorphism is only useful within OOP's framework, even though we may avoid it? Are there any situations where we have to use dynamic dispatching (and thus have to use the 'is-a' model)? – Ireina Mar 09 '21 at 08:34
  • Does https://stackoverflow.com/questions/13106683/dynamic-dispatch-in-haskell answer your question? – Jean-Baptiste Yunès Mar 09 '21 at 08:37
  • @Jean-BaptisteYunès Thank you! That question is similar to mine. Its answers discuss at length how to implement a 'dynamic-dispatch-like' style of OOP (such as with ADTs or simply first-class functions), but these are still not OOP's way of dynamic polymorphism... I still want some specific examples where dynamic polymorphism is necessary and cannot be replaced. – Ireina Mar 09 '21 at 08:44
  • But if you have a framework, and thus do not know the class that will be used, the only way is to use dynamic dispatch, no? You should also know that "dynamic" dispatch is a semantic description; the trick is that a single indirection is sufficient to implement it (the C++ vptr), meaning there is no complex "function searching" algorithm. The cost is constant. – Jean-Baptiste Yunès Mar 09 '21 at 08:49
  • @Jean-BaptisteYunès What do you mean by 'not knowing the class that will be used'? Does that mean we only know the object's methods' names but have no idea of its real type (or we only know its interface)? I find this weird. Does this situation only appear in OOP's world? I understand that a vptr is sufficient; however, static polymorphism actually has no overhead compared with dynamic dispatching. – Ireina Mar 09 '21 at 08:54
  • @Ireina "we only know the object's methods' names but we have no idea of its real type". Yes, that's it! Very common in OOP, which specifically focuses on this kind of property... – Jean-Baptiste Yunès Mar 09 '21 at 09:00
  • Let's take a very simple example: a user interface. All implementations I know of have a tree structure, with nodes having a certain compile-time base type but actual subtypes determining their appearance and behavior. I have no idea what an implementation of a real-life UI not using this kind of polymorphism would look like. – Holger Mar 09 '21 at 09:05
  • @Holger Thank you for your example. However, you can still use an ADT to implement this without dynamic overhead. The only difference I see is that OOP's way can extend node types easily. – Ireina Mar 09 '21 at 09:09
  • I doubt that a recursively defined algebraic data type for an arbitrarily large and deep tree will perform better than a plain tree with dynamic dispatch. That depends a lot on the compiler's ability to transform the definition into something entirely different under the hood. The result wouldn't be so different from what the OOP code produces. – Holger Mar 09 '21 at 09:51
  • Thank you, I agree with you. I now think OOP's way may have better scalability within a normal OO framework, and that this topic may be a little complicated. – Ireina Mar 09 '21 at 12:12

2 Answers


In Haskell, we replace "dynamic polymorphism" with higher-order functions.

Consider the following problem: we want to define a type which denotes a predicate. We will eventually use this type Predicate when we implement our lists, so that we can define the filter function. We would like to be able to easily define the equality predicate and the less-than predicate, and to combine two predicates by joining them with "and".

A reasonable Java attempt would look like this.

interface Predicate<T> {
    public abstract boolean predicate(T x);
}

class EqualsPredicate<T> implements Predicate<T> {
    private final T value;

    public EqualsPredicate(T value) {
        this.value = value;
    }

    @Override
    public boolean predicate(T x) {
        return this.value.equals(x);
    }
}

class LessPredicate<T extends Comparable<T>> implements Predicate<T> {
    private final T value;

    public LessPredicate(T value) {
        this.value = value;
    }

    @Override
    public boolean predicate(T x) {
        return this.value.compareTo(x) < 0;
    }
}

class AndPredicate<T> implements Predicate<T> {
    private final Predicate<T> p1;
    private final Predicate<T> p2;

    public AndPredicate(Predicate<T> p1, Predicate<T> p2) {
        this.p1 = p1;
        this.p2 = p2;
    }

    @Override
    public boolean predicate(T x) {
        return p1.predicate(x) && p2.predicate(x);
    }
}

In Haskell, the answer to this conundrum is obvious. We just define

type Predicate t = t -> Bool

makeEqualsPredicate :: Eq t => t -> Predicate t
makeEqualsPredicate = (==)

makeLessPredicate :: Ord t => t -> Predicate t
makeLessPredicate = (<)

makeAndPredicate :: Predicate t -> Predicate t -> Predicate t
makeAndPredicate p1 p2 x = p1 x && p2 x
-- or, even more succinctly, makeAndPredicate = liftA2 (&&)
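
As a small usage sketch (positiveAndNotFive is a made-up example, not part of the original problem), these predicates combine directly with filter:

positiveAndNotFive :: Predicate Int
positiveAndNotFive =
  makeAndPredicate (makeLessPredicate 0) (\x -> not (makeEqualsPredicate 5 x))

-- filter positiveAndNotFive [1..10] == [1,2,3,4,6,7,8,9,10]
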

Java allows "dynamic dispatch" of methods through subclassing. Haskell allows "dynamic dispatch" of functions through higher-order functions.

But wait, you say. Predicate was an interface with only one method. What should we do if we want to have two methods?

Well, if an interface with one method corresponds to a function, an interface with two methods must correspond to a pair of functions. This is just the OOP principle known as "composition over inheritance".
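
Concretely, a two-method interface becomes a record holding two functions (a hypothetical sketch; Comparator is a made-up name):

-- An "interface" with two methods, encoded as a record of two functions.
data Comparator a = Comparator
  { isEqual :: a -> a -> Bool
  , isLess  :: a -> a -> Bool
  }

intComparator :: Comparator Int
intComparator = Comparator { isEqual = (==), isLess = (<) }

Passing a Comparator value around plays the same role as passing an object that implements a two-method interface.
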

So we can always replace Java-style dynamic polymorphism with Haskell-style higher-order functions.

In fact, you actually see this observation in modern Java as well. As of Java 8, you can add the annotation @FunctionalInterface to an interface with one method, which permits you to create instances of that interface using lambdas. So you could write in Java 8

@FunctionalInterface
interface Predicate<T> {
    public abstract boolean predicate(T x);

    public static <J> Predicate<J> makeEqualsPredicate(J t) {
        return (x) -> t.equals(x);
    }

    public static <J extends Comparable<J>> Predicate<J> makeLessPredicate(J t) {
        return (x) -> t.compareTo(x) < 0;
    }

    default Predicate<T> makeAndPredicate(Predicate<T> other) {
        return (x) -> this.predicate(x) && other.predicate(x);
    }
}
Mark Saving
  • Thank you very much! Higher-order functions are awesome! However, with polymorphism, I'd like to use the same function name with various kinds of parameters (of different types). In Haskell, the only way I see is through typeclasses (via static overloading) or generics (limited), which is quite different from OOP's dynamic dispatching. So in Haskell we may simulate 'OOP-like' dynamic polymorphism. However, are there any cases where I have to use this instead of simple static overloading? – Ireina Mar 10 '21 at 02:46

With many people's help, I've now got some of the answers I wanted after reflecting on lots of designs. Since Rust has nice support for both static and dynamic polymorphism, I shall use Rust in this answer to demonstrate my points.

I now have two points in favor of dynamic dispatch: user-friendly scalability and smaller compiled size.

Point 1: user-friendly scalability

Many people argue that dynamic dispatch is suitable for situations where you have a container collecting various kinds of objects (of different types, of course). For example:

trait MyTrait {
  fn method(&self, ...) -> ReturnType;
}

type MyContainer = Vec<Box<dyn MyTrait>>;

fn main() {
  ...
  //the_container : MyContainer
  the_container.iter().map(|x| x.method(...)); //only demo code
}

In the code above, at compile time we only know that the elements are trait objects, which means the program has to use a vptr-like strategy to find which method to call when executing the expression in the main function.

However, there's another way to implement nearly the same thing:

enum MyContainerType {
  Type0(A),
  Type1(B),
  ...
}

impl MyTrait for MyContainerType {
  fn method(&self, ...) -> ReturnType {
    match self {
      MyContainerType::Type0(a) => {...},
      MyContainerType::Type1(b) => {...},
      ...
    }
  }
}

type MyContainer = Vec<MyContainerType>;

fn main() {
  ...
  //my_container : MyContainer
  my_container.iter().map(|x| x.method(...)); //demo
}

In this way, no dynamic polymorphism is required. However, consider the following situation: you are the user of a library that has already been designed, and you have no access to change definitions like the enum inside the library. Now you want to make your own kind of container element type, and you want to reuse the library's existing logic. If the library uses dynamic dispatch, the work is simple: just write another impl of the trait for your custom type and everything's fine. Unfortunately, if you are using the static version of the library, it may become rather hard to achieve this goal...
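
For comparison, the same contrast can be sketched in Haskell (Describable, Circle and Square are hypothetical names): a type class stays open for downstream instances, while a sum type is closed:

-- Open for extension: the type class plays the role of the trait.
class Describable a where
  describe :: a -> String

data Circle = Circle Double

instance Describable Circle where
  describe (Circle r) = "circle of radius " ++ show r

-- A downstream user can add a case without editing the library:
data Square = Square Double

instance Describable Square where
  describe (Square s) = "square of side " ++ show s
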

Point 2: Smaller compiled size

Languages like Rust may have to compile a generic function many times, once for each type it's used with. This can make the binary large, a phenomenon called code bloat in C++ circles. Let's consider a simple case:

trait MyTrait {
  fn method(&self, ...) -> ReturnType;
}

fn f(x: MyTrait) { ... } // MyTrait is unsized; this is invalid Rust code
fn g<T: MyTrait>(x: T) { ... }

If you have lots of functions like g, the compiled binary may become larger and larger. However, this is usually not a big issue, since most of us have the luxury of ignoring code size given today's plentiful memory.
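
For what it's worth, GHC usually sits on the other side of this tradeoff: an overloaded function is compiled once and receives a typeclass dictionary at run time (closer in spirit to dynamic dispatch), while a SPECIALIZE pragma requests a monomorphic copy, trading code size for speed. A small sketch (double is a hypothetical example):

-- Ask GHC to emit an extra, monomorphic copy for Int.
{-# SPECIALIZE double :: Int -> Int #-}
double :: Num a => a -> a
double x = x + x
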

Conclusion

In short, although static polymorphism has many advantages over dynamic polymorphism, there are still corners where dynamic dispatch works better. Personally, I really love the Haskell-like way of treating polymorphism (that's also why I like Rust). I don't think this can be the final, complete answer; discussion is welcome!

Combining strategies

It suddenly occurred to me: why not combine the static and dynamic strategies? To allow users to further extend our model, we can just leave a small hole for users to fill in later, like:

trait Custom {
  fn method(&self) -> ReturnType;
}

enum Object {
  A(Type0),
  B(Type1),
  ...
  Hole(Box<dyn Custom>)
}

However, in this way some operations, like clone, may be a little hard to implement. But I think this is still an interesting idea.

Update

Haskell's existential types serve a similar function, and have a similar implementation, to dynamic polymorphism in OOP languages:

{-# LANGUAGE ExistentialQuantification #-}

data Obj = forall a. (Show a) => Obj a

xs :: [Obj]
xs = [Obj 1, Obj "foo", Obj 'c']

doShow :: [Obj] -> String
doShow [] = ""
doShow ((Obj x):xs) = show x ++ doShow xs

I also found that existential types can be used to hide some details of types and provide a cleaner interface for users.

Edit

Thanks @MarkSaving. There's a mistake in Point 2's code: a dyn trait object is unsized and should therefore be changed to a reference or a boxed dyn:

fn f(x: Box<dyn MyTrait>) { ... }
Ireina
  • In Rust, you can't just do `fn f(x: MyTrait) { ... }` because trait objects are unsized. You must instead do `fn f(x: Box<dyn MyTrait>) { ... }`. You can replicate Rust's dyn trait model in Haskell using existential types. – Mark Saving Mar 11 '21 at 21:30
  • @MarkSaving Thank you! This was my mistake; I shall make a note of it and use a ref or boxed dyn instead. – Ireina Mar 12 '21 at 01:49