
Say I have a git server. User X wants to add files A, B, C, D, E, and then commit, while user Y wants to add F, G, H, I, J, and then commit.

If they happen to operate at the same time, is it possible that `git add F G H I J` runs during the `git add A B C D E` and `git commit -m 'user X commit'` process, so that the files added by Y get mixed into user X's commit (especially when files A–E are large and take a long time to add and commit)?

If it's possible, is there a solution for this?

Danny Lin
  • Potentially related: http://stackoverflow.com/questions/13459219/is-it-safe-if-more-git-commands-are-run-on-the-same-repo-in-parallel – Hayley Oct 29 '13 at 17:12

1 Answer


First, there is a delay between typing git add and then git commit. So even if the two git add commands were performed "simultaneously", the first user whose git commit executed would commit all the staged files, and the other user would get an error to the effect that there is nothing to commit.
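Under the shared-directory assumption this is easy to reproduce: both users' git add calls write to the same index, so whichever commit runs first sweeps up everything staged. A minimal sketch in a throwaway directory (file names match the question's A–J):

```shell
#!/bin/sh
# Two "users" sharing one working directory and one index.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q .
git config user.email x@example.com
git config user.name X

touch A B C D E F G H I J
git add A B C D E F G H I J      # both users' adds land in the same index
git commit -q -m 'user X commit' # first commit takes everything staged

# user Y's commit now finds nothing staged and fails
if git commit -q -m 'user Y commit'; then
  echo "unexpected: second commit succeeded"
else
  echo "second commit failed: nothing to commit"
fi
```

After this runs, a single commit contains all ten files, including Y's.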

This all assumes that both "users" operate as the same user on the same directory, which has nothing to do with the usual git workflow, where each user operates on their own clone of a common repo. The problem you refer to could occur at the git push level. However, the first push received by the remote repo would be applied as expected, and the other user would get a "push rejected" error, since their history would no longer match the remote's; the "slower" user would then have to apply their changes on top of the other user's commit. That would be easy enough to resolve, as it's guaranteed there wouldn't be any conflicts.
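The push-level behaviour can be sketched with two clones of one bare repo (all paths here are temporary and hypothetical): the second push is rejected, and after a rebase it goes through cleanly since the two users touched different files.

```shell
#!/bin/sh
set -e
base=$(mktemp -d)
git init -q --bare "$base/remote.git"

git clone -q "$base/remote.git" "$base/x"
git clone -q "$base/remote.git" "$base/y"
for u in x y; do
  git -C "$base/$u" config user.email "$u@example.com"
  git -C "$base/$u" config user.name "$u"
done

# user X commits and pushes first
touch "$base/x/A"
git -C "$base/x" add A
git -C "$base/x" commit -q -m 'user X commit'
git -C "$base/x" push -q origin HEAD:master

# user Y's push is rejected: the remote has a commit Y doesn't have
touch "$base/y/F"
git -C "$base/y" add F
git -C "$base/y" commit -q -m 'user Y commit'
if git -C "$base/y" push -q origin HEAD:master 2>/dev/null; then
  echo "unexpected: second push succeeded"
else
  # Y integrates X's commit, then the push succeeds; no content conflict
  git -C "$base/y" pull -q --rebase origin master
  git -C "$base/y" push -q origin HEAD:master
fi
```

The remote ends up with both commits, in push order.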

I suggest you read: http://www.danielmiessler.com/study/git/ or any other git tutorials out there.

If you are talking about a version-controlled deployment that many people may access simultaneously, I strongly recommend changing the workflow so that you publish changes to an external repo via push, and deploy to the commonly-used server via git clone and then git pull:

  user1 local repo git push to --> common repo <-- git clone/pull on common server
  user2 local repo git push to ----^
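The diagram above can be sketched end to end, assuming a bare "hub" repo that users push to and a deploy checkout on the server that only ever pulls (all paths and the index.php file are hypothetical):

```shell
#!/bin/sh
set -e
root=$(mktemp -d)
git init -q --bare "$root/hub.git"   # the external common repo

# a user publishes changes by pushing to the hub
git clone -q "$root/hub.git" "$root/user1"
git -C "$root/user1" config user.email u1@example.com
git -C "$root/user1" config user.name u1
echo hello > "$root/user1/index.php"
git -C "$root/user1" add index.php
git -C "$root/user1" commit -q -m 'publish'
git -C "$root/user1" push -q origin HEAD:master

# the server deploys: clone once, then pull on each update
git clone -q "$root/hub.git" "$root/deploy"
git -C "$root/deploy" pull -q origin master
```

Nobody ever commits in the deploy checkout, so the concurrency question never arises there.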
Sebastien
  • When I talked about the git server, I meant the backend of a PHP wiki system, where the `git add` and `git commit` are run by the PHP script on receiving a specific user request via http(s). It seems that it DOES conflict, and there must be an additional lock mechanism in the PHP script to prevent such a situation? – Danny Lin Oct 29 '13 at 17:29
  • We would all benefit if you could put those extra details in the above question. If the virtual users are all directly accessing the same machine, then yes, I guess the PHP code should lock. But you could still use my suggested architecture (even if the repos belong to the same machine-level user, in different paths). I doubt the delays would matter much in terms of performance. – Sebastien Oct 29 '13 at 17:43
  • A wiki system is an interface that allows users to edit online; each edit is sent as a git commit to a git repo on the server. In this situation there is no "local repo" for the users, I think. An example of such a git-backed web wiki system is GitHub's wiki for a repo. – Danny Lin Oct 29 '13 at 17:52
  • It's pretty hard to guess that this is the info you are seeking from the formulation of your question, don't you agree? What is it that you want to achieve with that question? Reverse-engineer GitHub, or get help with one of your actual implementations? Just trying to help here :) – Sebastien Oct 29 '13 at 18:08
  • What I want is to find a mechanism for PHP to record what a user sent to git without conflicts (besides locking the whole system from writing whenever there's an operation on git). Hmmm... it would be very helpful if there were source code for the GitHub wiki or a similar git-backed web wiki system, but I haven't found one yet :( – Danny Lin Oct 29 '13 at 18:16
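The locking idea the comments converge on can be sketched in shell with Linux's flock(1); PHP's flock() gives the same guarantee. Here the lock wraps each add/commit pair so concurrent requests queue up instead of interleaving. The lock-file name and page names are hypothetical:

```shell
#!/bin/sh
repo=$(mktemp -d)
git init -q "$repo"
git -C "$repo" config user.email wiki@example.com
git -C "$repo" config user.name wiki

commit_edit() {
  # usage: commit_edit <file> <message>
  # flock -x serializes the whole add+commit so edits can't interleave
  flock -x "$repo/.git/wiki.lock" sh -c "
    touch '$repo/$1' &&
    git -C '$repo' add '$1' &&
    git -C '$repo' commit -q -m '$2'
  "
}

# two "requests" arriving at the same time
commit_edit pageA 'user X commit' &
commit_edit pageB 'user Y commit' &
wait
```

Each edit ends up in its own commit, regardless of arrival order. In the PHP script, the equivalent is to flock() a dedicated lock file for the duration of the add/commit pair.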