With the help of many people, and after considering lots of designs, I've now arrived at some of the answers I wanted. Since Rust has nice support for both static and dynamic polymorphism, I'll use Rust in this answer to demonstrate my points.
I now have two points in favor of dynamic dispatch: user-friendly extensibility and smaller compiled size.
Point 1: user-friendly extensibility
Many people argue that dynamic dispatch is suitable for situations where you have a container collecting objects of various kinds (that is, of different types). For example:
trait MyTrait {
    fn method(&self, ...) -> ReturnType;
}

type MyContainer = Vec<Box<dyn MyTrait>>;

fn main() {
    ...
    // the_container: MyContainer
    the_container.iter().map(|x| x.method(...)); // only demo code
}
In the code above, at compile time we only know that the elements are trait objects, which means the program has to use a vptr-like strategy to find which method to call while executing the expression in the main function.
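To make the demo concrete, here is a runnable sketch of the same idea (the concrete types `TypeA` and `TypeB` and the `String` return type are made up for illustration):

```rust
trait MyTrait {
    fn method(&self) -> String;
}

struct TypeA;
struct TypeB;

impl MyTrait for TypeA {
    fn method(&self) -> String { "A".to_string() }
}
impl MyTrait for TypeB {
    fn method(&self) -> String { "B".to_string() }
}

fn main() {
    // Each call to method() goes through the vtable at runtime.
    let the_container: Vec<Box<dyn MyTrait>> = vec![Box::new(TypeA), Box::new(TypeB)];
    let results: Vec<String> = the_container.iter().map(|x| x.method()).collect();
    println!("{:?}", results); // ["A", "B"]
}
```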
However, there's another way to implement nearly the same thing:
enum MyContainerType {
    Type0(A),
    Type1(B),
    ...
}

impl MyTrait for MyContainerType {
    fn method(&self, ...) -> ReturnType {
        match self {
            MyContainerType::Type0(a) => {...},
            MyContainerType::Type1(b) => {...},
            ...
        }
    }
}
type MyContainer = Vec<MyContainerType>;
fn main() {
    ...
    // my_container: MyContainer
    my_container.iter().map(|x| x.method(...)); // demo
}
In this way, no dynamic polymorphism is required. However, consider the following situation: you are the user of a library whose design is fixed, so you have no way to change definitions such as the enums inside it. Now you want to add your own container type and reuse the library's existing logic. If the library uses dynamic dispatch, the work is simple: just write another impl of the trait for your custom type and everything works. Unfortunately, if you are using the static version of the library, it may be rather hard to achieve this goal.
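A small sketch of that extension story (the library function `process_all` and the type `MyCustomType` are invented for illustration):

```rust
// --- Inside the library ---
trait MyTrait {
    fn method(&self) -> String;
}

// The library's logic only sees trait objects, so it never needs
// to know the full list of implementing types in advance.
fn process_all(items: &[Box<dyn MyTrait>]) -> Vec<String> {
    items.iter().map(|x| x.method()).collect()
}

// --- In user code: extend without touching the library ---
struct MyCustomType;

impl MyTrait for MyCustomType {
    fn method(&self) -> String { "custom".to_string() }
}

fn main() {
    let items: Vec<Box<dyn MyTrait>> = vec![Box::new(MyCustomType)];
    println!("{:?}", process_all(&items)); // ["custom"]
}
```

With the enum-based static version, adding `MyCustomType` would instead require editing the enum and every `match` on it, which a library user cannot do.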
Point 2: Smaller compiled size
Languages like Rust may have to compile a generic function many times, once for each type it’s used with. This could make the binary large, a phenomenon called code bloat in C++ circles.
Let's consider a simpler case:
trait MyTrait {
    fn method(&self, ...) -> ReturnType;
}
fn f(x: MyTrait) { ... } // MyTrait is unsized, so this is invalid Rust code
fn g<T: MyTrait>(x: T) { ... }
If you have lots of functions like g, the compiled size may grow larger and larger. However, this should not be a big issue, since memory is plentiful nowadays and most of us have the luxury of ignoring code size.
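The difference can be sketched as follows (the `String`-returning signatures and the impls for `i32` and `&str` are made up for the demo):

```rust
trait MyTrait {
    fn method(&self) -> String;
}

impl MyTrait for i32 {
    fn method(&self) -> String { format!("i32: {}", self) }
}
impl MyTrait for &str {
    fn method(&self) -> String { format!("str: {}", self) }
}

// Monomorphized: the compiler emits a separate copy of g
// for each concrete T it is instantiated with.
fn g<T: MyTrait>(x: T) -> String { x.method() }

// Dynamic dispatch: a single copy of f exists in the binary,
// at the cost of a vtable lookup per call.
fn f(x: &dyn MyTrait) -> String { x.method() }

fn main() {
    println!("{}", g(1));    // instantiates g::<i32>
    println!("{}", g("hi")); // instantiates g::<&str>
    println!("{}", f(&1));   // the same single f serves every type
}
```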
Conclusion
In short, although static polymorphism has many advantages over dynamic polymorphism, there are still corners where dynamic dispatch works better. Personally I really love Haskell's way of treating polymorphism (that's also why I like Rust). I don't think this can be the final, complete answer; discussions are welcome!
Combining strategies
It suddenly occurred to me: why not combine the static and dynamic strategies? To allow users to further extend our model, we can leave a small hole for users to fill in later, like:
trait Custom {
    fn method(&self) -> ReturnType;
}

enum Object {
    A(Type0),
    B(Type1),
    ...
    Hole(Box<dyn Custom>),
}
However, in this way, some operations like clone may be a little hard to implement; still, I think this is an interesting idea.
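One common workaround for the clone problem (a sketch; the concrete types `Type0` and `UserType` are invented here) is to add a `clone_box` method to the trait, so the `Hole` variant can be cloned through the trait object:

```rust
trait Custom {
    fn method(&self) -> String;
    // Each implementor knows how to clone itself into a fresh box.
    fn clone_box(&self) -> Box<dyn Custom>;
}

// With this impl, Box<dyn Custom> is Clone, so Object can derive Clone.
impl Clone for Box<dyn Custom> {
    fn clone(&self) -> Self {
        self.clone_box()
    }
}

#[derive(Clone)]
struct Type0(i32);

#[derive(Clone)]
struct UserType;

impl Custom for UserType {
    fn method(&self) -> String { "user".to_string() }
    fn clone_box(&self) -> Box<dyn Custom> { Box::new(self.clone()) }
}

#[derive(Clone)]
enum Object {
    A(Type0),
    Hole(Box<dyn Custom>),
}

fn main() {
    let obj = Object::Hole(Box::new(UserType));
    let _copy = obj.clone(); // works thanks to clone_box
}
```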
Update
Haskell's existential types serve a similar purpose, and have a similar implementation, to dynamic polymorphism in OOP languages:
data Obj = forall a. (Show a) => Obj a
xs :: [Obj]
xs = [Obj 1, Obj "foo", Obj 'c']
doShow :: [Obj] -> String
doShow [] = ""
doShow ((Obj x):xs) = show x ++ doShow xs
I also found that existential types can be used to hide some details of a type and provide a cleaner interface for users.
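For comparison, here is my own rough translation of the `doShow` example into Rust trait objects, with `std::fmt::Display` playing the role of Haskell's `Show`:

```rust
use std::fmt::Display;

// Box<dyn Display> plays the role of `forall a. (Show a) => Obj a`:
// the concrete type is erased, only the capability remains.
fn do_show(xs: &[Box<dyn Display>]) -> String {
    xs.iter().map(|x| x.to_string()).collect()
}

fn main() {
    let xs: Vec<Box<dyn Display>> = vec![Box::new(1), Box::new("foo"), Box::new('c')];
    println!("{}", do_show(&xs)); // "1fooc"
}
```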
Edit
Thanks @MarkSaving. There's a mistake in Point 2's code: the dyn trait object is unsized, and therefore the parameter should be changed to a reference or a boxed dyn:
fn f(x: Box<dyn MyTrait>) { ... }