I'm setting up a unit test framework using CTest and CMake. The requirement is that each test command is executed in a Docker container, i.e. the test itself runs inside the container.
The add_test call looks like this (CMake arguments are space-separated, not comma-separated):

add_test(test_name /bin/sh runner.sh test_cmd)

where runner.sh is the script that starts the container and test_cmd is the test command that runs inside the container.
test_cmd looks like this:

/path/to/test/test_binary; CODE=$?; echo $CODE > /root/result.txt
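Stripped of Docker, the exit-status capture in test_cmd can be checked on its own; a minimal sketch, using /bin/false as a stand-in for test_binary and /tmp instead of /root:

```shell
# Stand-in for the real test binary: exits with a nonzero status.
/bin/false
CODE=$?
# Persist the exit status, as test_cmd does with /root/result.txt.
echo $CODE > /tmp/result.txt
cat /tmp/result.txt   # → 1
```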
runner.sh contains this line (test_cmd stands for the expanded test command):

docker exec -t -i --user root $CONTAINERNAME bash -c "test_cmd"
runner.sh then tries to read /root/result.txt from the container.

runner.sh spawns a new container for each test, so each test runs in its own container and they cannot interfere with one another when executed in parallel; /root/result.txt is separate for each container.
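For reference, a minimal sketch of what runner.sh does, under stated assumptions (the image name myimage, the container naming scheme, and passing test_cmd as the first argument are all made up here, since the real script isn't shown); the sketch writes the script to a temp file so it can be syntax-checked without Docker:

```shell
# Hypothetical reconstruction of runner.sh; names are assumptions.
cat > /tmp/runner.sh <<'EOF'
#!/bin/sh
# One fresh container per test, so parallel tests cannot interfere.
CONTAINERNAME="ctest_$$"
docker run -d --name "$CONTAINERNAME" myimage sleep infinity
# Run the test command inside the container (flags as in the question).
docker exec -t -i --user root "$CONTAINERNAME" bash -c "$1"
# Read back the exit status the test wrote inside the container.
docker exec "$CONTAINERNAME" cat /root/result.txt
docker rm -f "$CONTAINERNAME"
EOF
sh -n /tmp/runner.sh && echo "runner.sh syntax OK"
```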
When I run the tests like this:

make test ARGS="-j8"

some specific tests never generate /root/result.txt, so reading that file fails (the docker exec for test_cmd has already returned by then), and I cannot see the stdout of those tests in LastTest.log.
When I run the tests like this:

make test ARGS="-j1"

all tests pass: /root/result.txt is generated for every test and I can see their output (stdout).
The same behavior occurs for any -j > 1. The tests are not being timed out; I checked. My guess was that runner.sh reads /root/result.txt before echo $CODE > /root/result.txt has run, but the commands in test_cmd are separated by ;, so sh executes them sequentially and does not move on until each command has exited. Besides, that would not explain why everything passes with -j1.
One interesting observation: when I run the same docker exec command from a Python script using subprocess instead of bash, it works:
import subprocess

def executeViaSubprocess(cmd, doOutput=False):
    # stderr must be piped too, otherwise communicate() returns None for it
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = p.communicate()
    retCode = p.returncode
    return retCode, stdout, stderr