
I have to decide whether to use templates or virtual inheritance.
In my situation, the trade-offs make it really hard to choose.
In the end, it boiled down to: how much does a virtual call really cost (CPU)?
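To make the trade-off concrete, here is a minimal sketch of the two designs being compared; the names (AnimalV, BirdV, legs, ...) are only illustrative, not the real code:

#include <iostream>

// Virtual-inheritance style: the call is resolved at run time through the vtable.
struct AnimalV {
    virtual ~AnimalV() = default;
    virtual int legs() const { return 4; }
};
struct BirdV : AnimalV {
    int legs() const override { return 2; }
};
int countLegsVirtual(const AnimalV& a) { return a.legs(); }   // virtual call

// Template style: the call is resolved at compile time, no vtable involved.
struct AnimalT { int legs() const { return 4; } };
struct BirdT   { int legs() const { return 2; } };
template <class T>
int countLegsTemplate(const T& a) { return a.legs(); }        // normal call

int main() {
    BirdV bv;
    BirdT bt;
    std::cout << countLegsVirtual(bv) << " " << countLegsTemplate(bt) << std::endl;   // prints "2 2"
}

The question below is about how much the virtual call in the first design costs relative to the normal call in the second.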

I found very few resources that dare to measure the vtable cost in actual numbers, e.g. https://stackoverflow.com/a/158644, which points to page 26 of http://www.open-std.org/jtc1/sc22/wg21/docs/TR18015.pdf.

Here is an excerpt from it:

"However, this overhead (of virtual function calls) is on the order of 20% and 12% – far less than the variability between compilers."

Before relying on that claim, I decided to test it myself.

My test code is a little long (~40 lines); you can also see it in action at the links.
The number reported is the ratio of the time spent on virtual calls divided by the time spent on normal calls.
Unexpectedly, the result contradicts what open-std stated.

Here it is:

#include <iostream>
#include <chrono>
#include <cstdlib>   // rand, RAND_MAX
#include <vector>
using namespace std;

class B2 {
    public:
        int randomNumber = ((double) rand() / RAND_MAX) * 10;
        virtual ~B2() = default;
        virtual int f(int n) { return -n + randomNumber; }   // virtual call
        int g(int n) { return -n + randomNumber; }            // normal (non-virtual) call
};
class C : public B2 {
    public:
        int f(int n) override { return n - randomNumber; }
};

int main() {
    // Fill the vector with a random mix of B2 and C objects, so the dynamic type
    // at each call site is not predictable.
    std::vector<B2*> bs;
    const int numTest = 1000000;
    for (int n = 0; n < numTest; n++) {
        if (((double) rand() / RAND_MAX) > 0.5) {
            bs.push_back(new B2());
        } else {
            bs.push_back(new C());
        }
    }
    // Time numTest virtual calls ...
    auto t1 = std::chrono::system_clock::now();
    int s = 0;
    for (int n = 0; n < numTest; n++) {
        s += bs[n]->f(n);
    }
    auto t2 = std::chrono::system_clock::now();
    // ... then numTest non-virtual calls on the same objects.
    for (int n = 0; n < numTest; n++) {
        s += bs[n]->g(n);
    }
    auto t3 = std::chrono::system_clock::now();
    auto t21 = t2 - t1;
    auto t32 = t3 - t2;
    std::cout << t21.count() << " " << t32.count()
              << " ratio=" << (((float) t21.count()) / t32.count()) << std::endl;
    std::cout << s << std::endl;   // print the sum so the calls cannot be optimized away
    for (int n = 0; n < numTest; n++) {
        delete bs[n];
    }
}
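If it matters, the timing section could also be written with std::chrono::steady_clock (the clock usually recommended for measuring intervals) and an explicit microsecond conversion. The sketch below is only that variation; bs, s, and numTest are exactly the ones in the program above, and it does not change the logic of the test.

    // Sketch only: drop-in replacement for the timed section above, using steady_clock.
    auto t1 = std::chrono::steady_clock::now();
    for (int n = 0; n < numTest; n++) {
        s += bs[n]->f(n);      // virtual calls
    }
    auto t2 = std::chrono::steady_clock::now();
    for (int n = 0; n < numTest; n++) {
        s += bs[n]->g(n);      // non-virtual calls
    }
    auto t3 = std::chrono::steady_clock::now();
    auto virtualUs = std::chrono::duration_cast<std::chrono::microseconds>(t2 - t1).count();
    auto normalUs  = std::chrono::duration_cast<std::chrono::microseconds>(t3 - t2).count();
    std::cout << virtualUs << " " << normalUs
              << " ratio=" << (normalUs ? (float) virtualUs / normalUs : 0.f) << std::endl;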

Question

Is it to be expected that virtual calls are at least 50% slower than normal calls?
Did I test it the wrong way?

I have also read:

  • [Make sure you're testing what you think you're testing](http://stackoverflow.com/questions/16156130/why-is-my-program-so-slow/16156167). Also, while virtual functions are slower than non-virtual ones, what percentage of your code is calling virtual functions? In other words, if 95% of your code is doing non-virtual things and only 5% of your code is making virtual function calls, and virtual function calls take 1.5x the time, then your code is only running 1.025x slower with virtual functions than without (0.95 × 1 + 0.05 × 1.5 = 1.025). – gman May 17 '17 at 04:27
  • @gman I understand; what you mean is something like "premature optimization is evil". In other words, I might not gain much from optimizing this part. However, within the scope of this question, I still want to know the answer. – javaLover May 17 '17 at 04:31
