
I am writing a C++ library that has many adapters. It is easy to make an automated test that adapter foo can be applied to type a: just write the code that applies foo and check that it compiles. For extra confidence, you can even check that it works as intended ;) There are many unit test frameworks that assist with this.
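For instance, such a positive test can be a single translation unit (foo and a here are placeholder names standing in for a real adapter and type):

// Hypothetical positive test: if this file compiles, foo accepts a;
// the runtime assert adds the "works as intended" confidence.
#include <cassert>

template< typename T > struct foo { T value{}; };   // placeholder adapter
struct a { int x = 42; };                           // placeholder type

int main(){
    foo< a > adapted;                // must compile: foo applies to a
    assert( adapted.value.x == 42 );
}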

But it is just as important that certain things don't compile; for instance, foo should not accept type b. Is there common wisdom about testing this in an automated way? One approach could be to run the compiler and verify that it returns an error status (or doesn't produce an output file). But that would require one source file per test! Some macro magic could enable combining tests into one source file, but it would still require many compilation runs.
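A minimal sketch of that compiler-driving approach, assuming g++ is on the PATH and that must_fail.cpp (a placeholder name) holds one snippet that should be rejected:

// Negative-compilation driver: run the compiler on a snippet and
// treat a non-zero exit status as a passing test.
// Assumption: g++ is available; must_fail.cpp is the test snippet.
#include <cstdlib>
#include <iostream>

int main(){
    int status = std::system( "g++ -fsyntax-only must_fail.cpp" );
    bool rejected = ( status != 0 );
    std::cout << ( rejected ? "PASS" : "FAIL" ) << "\n";
    return rejected ? 0 : 1;
}

The obvious downside is exactly the one mentioned above: one compiler invocation (and one source file) per expected failure.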

I can't believe I am the first person to struggle with this problem. So, what is a good way to organize and run such tests?

(In a previous life I wrote a small compiler that had a command line option -fail 12, which caused the compiler to return an error code unless it encountered an error on line 12. Maybe I should patch GCC to accept something similar.)

1 Answer


It turns out that what I want can (in a limited way) be done with C++ concepts. The requires-expression in a concept tests whether its clauses are well-formed; if any is not, the concept evaluates to false. This code checks that pin_in_out has a set( bool ) function, and that pin_in doesn't.

#include <iostream>
#include <concepts>

struct pin_in_out { static void set( bool v ){} };
struct pin_in{ };

// Satisfied only when T::set( v ) is well-formed for a bool v
// and returns void (standard C++20 concept syntax).
template< typename T >
concept test_case = requires( bool v ) {
    { T::set( v ) } -> std::same_as< void >;
};

int main(){
   std::cout << test_case< pin_in_out > << "\n";
   std::cout << test_case< pin_in > << "\n";
}

prints

1
0
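For automated use, the prints can be replaced with static_asserts, so that a violated expectation fails the build itself. A minimal sketch, reusing the test_case concept above:

// Each expectation becomes a compile-time check; the message is
// reported when the assertion fires.
static_assert(  test_case< pin_in_out >, "pin_in_out must offer set( bool )" );
static_assert( !test_case< pin_in >,     "pin_in must not offer set( bool )" );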

The code required is a bit verbose (each test requires a separate concept), so I'll probably write a Python test-script-to-C++ translator.