Say I have 7 git projects that are interdependent:

  1. Database (talks to the server below)
  2. Rendering Server (talks to the server below)
  3. Main App Server (REST API)
  4. Start-up: scripts and JSON for building the above 4 Amazon EC2 machines
  5. Tests: run on the App Server
  6. JavaScript API (dependent on the REST API)
  7. JS and REST API Documentation (dependent on the REST API)

I assume the git project division above evolved in a practical manner, paralleling how the work is organized.

The problem is that the success of the whole system depends on keeping the above 7 projects in a consistent state with one another.

On the one hand, it appears easier to represent a change in the whole system by comparing a single project's SHA-1 hash at time_1 with the same project's SHA-1 at time_2.
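
For illustration, if everything lived in one grouped repo, that comparison might look like this (the tag names are hypothetical):

    # Mark the system state at two points in time (hypothetical tag names).
    git tag state-time_1
    # ... later, after more work has been merged ...
    git tag state-time_2

    # One pair of SHA-1s now describes the change in the whole system.
    git rev-parse state-time_1 state-time_2
    git diff --stat state-time_1 state-time_2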

On the other hand, I read these SO answers, which mention the complexity cost of grouping too many projects. In addition, Linus mentions the flexibility of git to use branches for different projects, and the ability to switch grouping strategies at a later date (but we won't take that approach since we are risk-averse novices).

Any advice or guidelines that take into account an enterprise with high operational risk and low professional experience would be greatly appreciated.


1 Answer

While a bit more complex, the "grouping" strategy has merits and is worth a shot:
It is called submodules, and it allows you to define one unique reference for all your projects while still managing those projects as independent git repos.
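
As a minimal sketch (the repository URLs and names below are hypothetical placeholders, not your actual repos), the parent "superproject" could be set up like this:

    # Create the parent repo whose commits pin every component to one known state.
    git init system-superproject
    cd system-superproject

    # Hypothetical internal URLs; substitute your actual seven repositories.
    git submodule add git@internal:database.git         database
    git submodule add git@internal:rendering-server.git rendering-server
    git submodule add git@internal:app-server.git       app-server
    git submodule add git@internal:startup.git          startup
    git submodule add git@internal:tests.git            tests
    git submodule add git@internal:js-api.git           js-api
    git submodule add git@internal:docs.git             docs

    # The .gitmodules file and the recorded submodule commits are committed
    # in the parent; one parent SHA-1 now identifies a consistent system state.
    git commit -m "Pin all seven components to a known-good combination"

Anyone who then clones the parent and runs "git submodule update --init --recursive" checks out exactly that combination of the seven projects.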

As I explain in "true nature of submodules", you can still make modifications in a submodule, as long as you record that new submodule state in the parent repo.
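
For the day-to-day flow, a hedged sketch (still using the hypothetical layout above):

    # Work inside one submodule exactly like a normal repo.
    cd system-superproject/app-server
    git checkout -b feature/new-endpoint
    # ... edit files, then commit and push inside the submodule ...
    git commit -am "Add new REST endpoint"
    git push origin feature/new-endpoint

    # Back in the parent, stage the updated submodule pointer (gitlink) and
    # commit, so the parent records the new consistent combination.
    cd ..
    git add app-server
    git commit -m "Advance app-server to the new endpoint commit"
    git push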
