I am going to give some classes about C++ and data structures, and to check the students' progress I'd like them to implement the structures I talk about. This is the common approach for data structures classes, I guess. But I want more: I want the students to get quick feedback on what they are missing, so I developed several unit tests for the classes that check the behavior and give them instant results on what is wrong.
This has been working well for the past two semesters, but I want to take the automated correction a step further. I've been studying how to check what the internal components of a class are, so I can tell whether someone has correctly implemented a tree with a `node* root` and a `size_t size`, and hasn't used additional, unnecessary attributes, for instance.
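Concretely, this is roughly the shape I expect a submission to have (a minimal sketch; the payload type is just an example):

```cpp
#include <cstddef>

// the layout I expect a student's tree to contain: a root pointer
// and a size, and nothing else
struct node {
    int   value;
    node* left  = nullptr;
    node* right = nullptr;
};

class tree {
    node*       root = nullptr;
    std::size_t size = 0;
    // any additional attribute here would count as unnecessary
};
```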
I know that I can get a rough approximation of an object's size with `sizeof`, but the results are not that precise. They frequently differ from what I expect. For example, I tested creating a class with a pointer (8 bytes) and an int (4 bytes), but `sizeof` reported 28. From what I've learned, this probably has something to do with the virtual function table and alignment padding.
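This is roughly the experiment I ran; I'm printing the sizes rather than asserting exact values, since they depend on the compiler and ABI:

```cpp
#include <iostream>

// Pointer + int, no virtual functions. On a typical 64-bit ABI the int is
// padded out to the pointer's 8-byte alignment, so this often prints 16.
struct plain {
    void* p;
    int   n;
};

// Same members plus a virtual destructor: most implementations add a
// hidden vtable pointer, so the object grows again.
struct with_vtable {
    void* p;
    int   n;
    virtual ~with_vtable() = default;
};

int main() {
    std::cout << "plain:       " << sizeof(plain)       << '\n';
    std::cout << "with_vtable: " << sizeof(with_vtable) << '\n';
}
```

So a check built on `sizeof` alone seems to be, at best, a heuristic: padding and a possible vptr make exact comparisons fragile.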
So, how far can I go in analyzing whether someone has coded a data structure in the proper, expected manner? How can I check that someone didn't just `#include <list>` and create an adaptor (for this I know I can just strip the includes, but anyway)?
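To illustrate, this is the kind of adaptor I'm worried about (a made-up submission; the names are hypothetical):

```cpp
#include <cstddef>
#include <list>

// A "linked list" that is really just a thin wrapper around std::list:
// it passes behavioural unit tests without the student writing any
// data-structure code.
template <typename T>
class linked_list {
    std::list<T> impl;  // all the real work is delegated
public:
    void push_front(const T& v) { impl.push_front(v); }
    void pop_front()            { impl.pop_front(); }
    std::size_t size() const    { return impl.size(); }
};
```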
and created an adaptor"* - you might need to decide whether you're really content to giving students "quick feedback" vs. aiming for fully automated marking. If you're going to read over the submission you'll notice deliberate abuses like adapting `
– Tony Delroy Jul 24 '15 at 00:50`, and if they're trying to get away with it you don't owe them the favour of "catching" them early. Focusing effort on helping students who're genuinely trying to create a good implementation sounds more productive to me.