Alright, here's the scenario: a team of developers wants to ensure that all new code matches the defined coding standards and that all unit tests pass before a commit is accepted. Here's the trick: the tests need to run on a dedicated testing machine, and we do not have access to modify the Git server, so this must be done with a local commit hook on each dev machine.
While the specs are pretty strict (we're not switching to Windows or Subversion, for example), this is a real-world problem, so there is some flexibility if you have a solution that almost fits.
- We're using Git and *nix.
- The updated code needs to be sent to another server to run the test suite.
- A list of modified files needs to be provided to ensure they match the coding standard.
- It's a rather large codebase, so we should send the smallest amount of information necessary to ensure identical copies of the codebase.
- If the tests fail, a message with the error needs to be displayed and the commit should be blocked.
- Assume we trust our dev team; it's okay to allow the tests to be bypassed with the `--no-verify` option.
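For context, here's roughly the shape of hook I have in mind: a minimal `.git/hooks/pre-commit` sketch that gathers the list of staged files and blocks the commit on a non-zero exit (the `check-style` command is a hypothetical stand-in for whatever style checker we end up using):

```shell
#!/bin/sh
# .git/hooks/pre-commit -- runs before the commit object is created.
# Can be bypassed with: git commit --no-verify

# List staged files (Added/Copied/Modified); these are the ones
# that need to pass the coding standard.
files=$(git diff --cached --name-only --diff-filter=ACM)
[ -z "$files" ] && exit 0

# Hypothetical style checker; substitute the real tool here.
if ! echo "$files" | xargs check-style; then
    echo "Coding standard violations found; commit blocked." >&2
    exit 1   # any non-zero exit aborts the commit
fi
```

The open part is what goes between the style check and `exit 0`: getting the candidate code onto the test machine.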
The question: what is the best way to get the test server to sync up with the local environment to run the tests? Some sort of hash-to-hash matching with a Git patch for the new commit? Skipping Git altogether and just doing an rsync? Something else entirely?
Update 8/7/13: I shot myself in the foot by even mentioning the remote repo. The point isn't to block the code from being pushed to the shared/remote repo; it's to prevent the local commit from even happening. Whether or not this would be considered a best practice is beside the point here, as this is specific to a small team of developers who all want exactly this functionality. The question is about the best way to achieve that goal.