The key piece here is to understand the cause of the error itself. With your function `fn bar(f : &dyn Foo) {` you would expect to be able to call `f.gen()` (given the current definition of `Foo`), but that can't be supported because we don't know what type it would return! In your specific code it could be either `A` or `B`, and in the general case anything at all could implement the trait. That is why this gives

> the trait `Foo` cannot be made into an object

If `Foo` could be made into a trait object, code that tries to use a reference to that object, like `f.gen()`, wouldn't be well-defined.
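
For concreteness, here is a minimal, hypothetical reconstruction of the kind of definitions that produce the error; the `eval` method and the empty struct bodies are assumptions for illustration, not the code from the question:

```rust
trait Foo {
    // Returning `Self` (here by reference) is what makes `Foo` object-unsafe:
    // through a `&dyn Foo` the compiler cannot know which concrete type comes back.
    fn gen(&mut self) -> &Self;
    fn eval(&self);
}

struct A;
struct B;

impl Foo for A {
    fn gen(&mut self) -> &Self { self }
    fn eval(&self) {}
}

impl Foo for B {
    fn gen(&mut self) -> &Self { self }
    fn eval(&self) {}
}

// Uncommenting this fails to compile:
// fn bar(f: &dyn Foo) {}
// error[E0038]: the trait `Foo` cannot be made into an object

fn main() {}
```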
> Now, we can resolve this in at least one of two ways. I don't understand the difference between the two and what circumstance either should be used or if another method should be used.
`fn gen(&mut self) -> &Self where Self : Sized;`

Because it now has a bound on `Self`, this method actually can't be used by your `bar` function, because `dyn Foo` is not `Sized`. If you put that bound in place and try to call `f.gen()` inside `bar`, you will get the error

> the `gen` method cannot be invoked on a trait object
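
A runnable sketch of this first option, assuming (as later parts of this answer suggest) that `Foo` also has an ordinary object-safe method called `eval`:

```rust
trait Foo {
    // The `Self: Sized` bound excludes `gen` from the trait's vtable,
    // so `Foo` becomes object-safe and `&dyn Foo` is allowed again.
    fn gen(&mut self) -> &Self where Self: Sized;
    fn eval(&self);
}

struct A;

impl Foo for A {
    fn gen(&mut self) -> &Self where Self: Sized { self }
    fn eval(&self) { println!("A::eval"); }
}

fn bar(f: &dyn Foo) {
    f.eval();   // fine: `eval` is dispatchable through the trait object
    // f.gen(); // error: the `gen` method cannot be invoked on a trait object
}

fn main() {
    bar(&A);
}
```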
`fn bar<F>(f : &F) where F : Foo + ?Sized {`

This approach addresses the issue because we actually do know what type `f.gen()` would return (`F`). Also note that this can be simplified to `fn bar<F: Foo>(f : &F) {` or even `fn bar(f : &impl Foo) {`.
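
A sketch of this second option; note that it takes `&mut F` rather than the `&F` shown above, purely so that `gen`, which takes `&mut self`, can actually be called inside the sketch:

```rust
trait Foo {
    // No `Self: Sized` bound needed in this version: `dyn Foo` is never used.
    fn gen(&mut self) -> &Self;
    fn eval(&self);
}

struct A;

impl Foo for A {
    fn gen(&mut self) -> &Self { self }
    fn eval(&self) { println!("A::eval"); }
}

// `&mut F` (instead of `&F`) only so `gen` can be demonstrated here.
fn bar<F: Foo + ?Sized>(f: &mut F) {
    let g: &F = f.gen(); // the return type is statically known: it is `&F`
    g.eval();
}

fn main() {
    let mut a = A;
    bar(&mut a);
}
```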
Unless you're really optimizing for performance, this mostly comes down to preference: would you rather pass a trait object around, or add a `<F>` parameter to every function the object is passed to?
More technical answer:
On the technical side, which you probably don't need to worry about, the tradeoff here is performance vs executable code size.
Your generic `bar<F>` function, because the type `F` is explicitly known inside it, will actually produce multiple copies of `bar` in the compiled executable, as if you had instead written `fn bar_A(f: &A) {` and `fn bar_B(f: &B) {`. This process is called monomorphization.
The upside of this process is that, because there are independent copies of the function, the compiler can optimize each copy's code better, and the call sites can be optimized too, since the type of `F` is known ahead of time. For instance, when you call `f.eval()`, `bar_A` will always call `A::eval` and `bar_B` will always call `B::eval`, and when you write `bar(aa.gen());` the compiler already knows it is calling `bar_A(aa.gen())`.
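
A compact way to see this (the `gen` method is omitted to keep the sketch short, and the printed output is just a stand-in for whatever `eval` really does):

```rust
trait Foo { fn eval(&self); }

struct A;
struct B;
impl Foo for A { fn eval(&self) { println!("A::eval"); } }
impl Foo for B { fn eval(&self) { println!("B::eval"); } }

// One generic definition in the source code...
fn bar<F: Foo>(f: &F) { f.eval() }

fn main() {
    bar(&A); // ...but this call instantiates `bar::<A>`, which calls `A::eval` directly,
    bar(&B); // ...and this one instantiates `bar::<B>`, which calls `B::eval` directly.
}
```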
The downside is that, if you had many types implementing `Foo` and you called `bar` with all of them, you would be creating just as many copies of `bar_XXX`, one per type. That makes your final executable larger, but potentially faster, because all the types were known, so the compiler could optimize and inline things.
On the other hand, if you go with `fn bar(f : &dyn Foo) {`, these two points could end up flipped. Since there is only one copy of `bar` in the executable, it doesn't know the type referenced by `f` when it calls `f.eval()`, which means you miss out on potential compiler optimizations and the function has to do dynamic dispatch. Where `f : &F` knows the type `F`, `f: &dyn Foo` has to look at metadata associated with `f` (its vtable pointer) to figure out which trait implementation's `eval` to call.
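
You can observe that extra metadata directly: a `&dyn Foo` is a "fat" pointer, twice the size of a plain reference, because it carries a vtable pointer alongside the data pointer (a simplified `Foo` with just `eval` is assumed here):

```rust
use std::mem::size_of;

trait Foo { fn eval(&self); }

struct A;
impl Foo for A { fn eval(&self) { println!("A::eval"); } }

fn main() {
    // `&A` is a single pointer; `&dyn Foo` carries a second word: the vtable
    // pointer used to find the right `eval` implementation at run time.
    println!("&A:       {} bytes", size_of::<&A>());       // one pointer (8 on 64-bit targets)
    println!("&dyn Foo: {} bytes", size_of::<&dyn Foo>()); // two pointers: data + vtable

    let a = A;
    let f: &dyn Foo = &a;
    f.eval(); // dispatched through the vtable
}
```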
This all means that with `f: &dyn Foo` your final executable will be smaller, which could be good for RAM usage, but it could be slower if `bar` is called as part of the core logic loop of your application.
See *What are the actual runtime performance costs of dynamic dispatch?* for more explanation.