Related Post: Delete folder with items
How can an array of pointers to struct end up holding more entries than it was declared with? As a learning exercise for ftw/nftw, I rewrote the nftw solution in the related post (above) to use ftw. Basically, the ftw callback fills an array of structs holding the filename, type and depth of each file; removal then proceeds from maxdepth down to 0, removing files and then directories along the way. Since this was a test, printf shows where unlink or rmdir would be called, and the removal commands are never actually executed.
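For context, here is a minimal sketch of that approach as I describe it above; the struct layout, the depth_of helper and the callback name fill_rmstat are my own stand-ins, not necessarily what the actual test program uses:

    #define _XOPEN_SOURCE 500   /* for ftw() and strdup() on glibc */
    #include <ftw.h>
    #include <sys/stat.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct _rmstat {
        char *name;   /* copy of the path reported by ftw      */
        int   type;   /* FTW_F, FTW_D, ...                      */
        int   depth;  /* number of '/' separators in the path   */
    };

    static struct _rmstat *rmstat[100];   /* variant (1): fixed pool of pointers */
    static int nentries;

    static int depth_of(const char *path)
    {
        int d = 0;
        for (; *path; path++)
            if (*path == '/')
                d++;
        return d;
    }

    /* ftw callback: record name, type and depth of every entry visited. */
    static int fill_rmstat(const char *fpath, const struct stat *sb, int typeflag)
    {
        (void)sb;
        struct _rmstat *e = malloc(sizeof *e);
        if (!e || !(e->name = strdup(fpath)))
            return -1;
        e->type  = typeflag;
        e->depth = depth_of(fpath);
        rmstat[nentries++] = e;           /* note: no bounds check against 100 */
        return 0;
    }

    int main(int argc, char **argv)
    {
        if (argc < 2 || ftw(argv[1], fill_rmstat, 200) == -1)
            return 1;

        /* Find the deepest level, then walk back to 0, printing instead of removing. */
        int maxdepth = 0;
        for (int i = 0; i < nentries; i++)
            if (rmstat[i]->depth > maxdepth)
                maxdepth = rmstat[i]->depth;

        for (int d = maxdepth; d >= 0; d--)
            for (int i = 0; i < nentries; i++)
                if (rmstat[i]->depth == d)
                    printf("%s %s\n",
                           rmstat[i]->type == FTW_D ? "rmdir " : "unlink",
                           rmstat[i]->name);
        return 0;
    }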
I tried the storage for the array of structs three different ways: (1) statically designating the number of pointers available with struct _rmstat *rmstat[100]; (and ftw's 'nopenfd' set to 200), (2) dynamically allocating struct _rmstat **rmstat;, and finally (3) adding the information to a linked list (the three variants are sketched below). To test the static allocation, I deliberately chose test directories with fewer than 100 files, and then directories with more than 100 files to force a failure.
To my surprise, the statically allocated version routinely handled directories containing well over 100 files, up to as many as 450! How is that possible? I thought the static allocation struct _rmstat *rmstat[100]; should guarantee a segfault (or a similar core dump) when the 101st struct assignment was attempted. Is there something gcc does with stack/heap allocation that allows this? Or is this just part of what 'undefined behavior' means? With ftw, I set 'nopenfd' greater than the number of available struct pointers, so I don't think this is a result of ftw limiting file descriptors and closing/reopening files. A stripped-down version of the over-run follows below.
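To show exactly what puzzles me, here is the over-run reduced to its bare essentials with no ftw involved; this is a stand-in for what I believe happens in the callback once more than 100 entries arrive, not code from the actual program:

    #include <stdio.h>
    #include <stdlib.h>

    struct _rmstat { char *name; int type; int depth; };

    static struct _rmstat *rmstat[100];   /* only 100 pointers declared */

    int main(void)
    {
        /* Assign 450 pointers into an array declared with 100 slots.
           Writing past the end is undefined behaviour, yet this mirrors
           what my test program does and it does not crash there. */
        for (int i = 0; i < 450; i++)
            rmstat[i] = malloc(sizeof(struct _rmstat));

        printf("assigned 450 pointers into a 100-element array\n");
        return 0;
    }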
I have searched but can't find an explanation for how you can possibly end up with more usable pointers than were declared. Does anybody here know how this can happen?
The test program source is available. It is safe: it deletes NOTHING, it just prints with printf. Build with: gcc -Wall -o rmftws rmdir-ftw-static.c
Thanks for any insight you can provide.